Test Report: Docker_Linux_containerd_arm64 19690

f8db61c9b74e1fc8d4208c01add19855c5953b45:2024-09-23:36339

Failed tests (2/327)

| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                              | 200.16       |
| 351   | TestStartStop/group/old-k8s-version/serial/SecondStart | 374.76       |
TestAddons/serial/Volcano (200.16s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 53.651823ms
addons_test.go:835: volcano-scheduler stabilized in 53.799151ms
addons_test.go:843: volcano-admission stabilized in 53.857299ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-h8fw7" [7748ce4f-bb90-4238-8c0c-8055f41dccee] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004833356s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-fs748" [6aa15c55-d131-4bc2-b3cb-1f8af032b455] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004444573s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-zc2sm" [a57d3669-5174-436f-adbb-d81e7fa76d7d] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00319329s
addons_test.go:870: (dbg) Run:  kubectl --context addons-095355 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-095355 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-095355 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ca2cb932-2c2b-4333-b875-278a05312a5b] Pending
helpers_test.go:344: "test-job-nginx-0" [ca2cb932-2c2b-4333-b875-278a05312a5b] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:902: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:902: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-095355 -n addons-095355
addons_test.go:902: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-23 13:30:59.959310953 +0000 UTC m=+481.279674554
addons_test.go:902: (dbg) Run:  kubectl --context addons-095355 describe po test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-095355 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-4cfaadb3-ae94-4b8d-8ad7-8672fef17b6e
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4cpkf (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-4cpkf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:902: (dbg) Run:  kubectl --context addons-095355 logs test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-095355 logs test-job-nginx-0 -n my-volcano:
addons_test.go:903: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
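Analysis: the describe output above shows the test pod requesting a full CPU (Requests: cpu: 1) on a single-node cluster created with 2 CPUs, and the many enabled addons already hold CPU requests on that node, so less than one CPU remains allocatable. That is consistent with the "0/1 nodes are unavailable: 1 Insufficient cpu." event. A quick way to verify, sketched here under the assumption that the addons-095355 context is still reachable (these commands are not part of the test run):

	# Show allocatable CPU and the requests already scheduled on the node
	kubectl --context addons-095355 describe node addons-095355 | grep -A 8 'Allocated resources'
	# List every pod's CPU request to see what is consuming the node's 2 CPUs
	kubectl --context addons-095355 get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_REQUEST:.spec.containers[*].resources.requests.cpu'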
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-095355
helpers_test.go:235: (dbg) docker inspect addons-095355:

-- stdout --
	[
	    {
	        "Id": "f475e868c0233acc01f7c52fb8e0d81fbfd84b1cee2696fc7da5bf681d7efc74",
	        "Created": "2024-09-23T13:23:40.641666198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1034868,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T13:23:40.763105221Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/f475e868c0233acc01f7c52fb8e0d81fbfd84b1cee2696fc7da5bf681d7efc74/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f475e868c0233acc01f7c52fb8e0d81fbfd84b1cee2696fc7da5bf681d7efc74/hostname",
	        "HostsPath": "/var/lib/docker/containers/f475e868c0233acc01f7c52fb8e0d81fbfd84b1cee2696fc7da5bf681d7efc74/hosts",
	        "LogPath": "/var/lib/docker/containers/f475e868c0233acc01f7c52fb8e0d81fbfd84b1cee2696fc7da5bf681d7efc74/f475e868c0233acc01f7c52fb8e0d81fbfd84b1cee2696fc7da5bf681d7efc74-json.log",
	        "Name": "/addons-095355",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-095355:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-095355",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d919247aa97d324025e2b1e66ff7e4a238a480a7c48299ca8cfb932b5e4060b9-init/diff:/var/lib/docker/overlay2/1bc43114731848917669438134af7ba5a2b2d3064205845371927727bb2fadd6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d919247aa97d324025e2b1e66ff7e4a238a480a7c48299ca8cfb932b5e4060b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d919247aa97d324025e2b1e66ff7e4a238a480a7c48299ca8cfb932b5e4060b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d919247aa97d324025e2b1e66ff7e4a238a480a7c48299ca8cfb932b5e4060b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-095355",
	                "Source": "/var/lib/docker/volumes/addons-095355/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-095355",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-095355",
	                "name.minikube.sigs.k8s.io": "addons-095355",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ea527bc3efe88d6b01eafe1420434e766ad85deacf777ccfa9a7d8598f21ef7",
	            "SandboxKey": "/var/run/docker/netns/8ea527bc3efe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41452"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41453"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41456"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41454"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41455"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-095355": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d43cef50d0121469716fb662c9fc67441136b4c05382f4c88d922c16d4b91a43",
	                    "EndpointID": "2d6cf8c0b663debd8249797808c637401125b69dfdb4ec516259dd25a18d1185",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-095355",
	                        "f475e868c023"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
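Analysis: the HostConfig above confirms the node container's resource ceiling: NanoCpus 2000000000 (2 CPUs, matching the --cpus=2 used when the container was created) and Memory 4194304000 bytes (matching the --memory=4000 start flag). As a sketch, assuming the container still exists, the same two fields can be read directly on the host:

	# Print the CPU and memory limits of the minikube node container
	docker inspect -f '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' addons-095355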
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-095355 -n addons-095355
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-095355 logs -n 25: (1.650492855s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-234829   | jenkins | v1.34.0 | 23 Sep 24 13:22 UTC |                     |
	|         | -p download-only-234829              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	| delete  | -p download-only-234829              | download-only-234829   | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	| start   | -o=json --download-only              | download-only-021106   | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC |                     |
	|         | -p download-only-021106              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	| delete  | -p download-only-021106              | download-only-021106   | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	| delete  | -p download-only-234829              | download-only-234829   | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	| delete  | -p download-only-021106              | download-only-021106   | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	| start   | --download-only -p                   | download-docker-290856 | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC |                     |
	|         | download-docker-290856               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-290856            | download-docker-290856 | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	| start   | --download-only -p                   | binary-mirror-015618   | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC |                     |
	|         | binary-mirror-015618                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38111               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-015618              | binary-mirror-015618   | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	| addons  | enable dashboard -p                  | addons-095355          | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC |                     |
	|         | addons-095355                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-095355          | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC |                     |
	|         | addons-095355                        |                        |         |         |                     |                     |
	| start   | -p addons-095355 --wait=true         | addons-095355          | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:27 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:23:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:23:16.225313 1034381 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:23:16.225442 1034381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:23:16.225574 1034381 out.go:358] Setting ErrFile to fd 2...
	I0923 13:23:16.225733 1034381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:23:16.226039 1034381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	I0923 13:23:16.226551 1034381 out.go:352] Setting JSON to false
	I0923 13:23:16.227514 1034381 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":155143,"bootTime":1726942654,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0923 13:23:16.227589 1034381 start.go:139] virtualization:  
	I0923 13:23:16.230735 1034381 out.go:177] * [addons-095355] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:23:16.234307 1034381 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:23:16.234420 1034381 notify.go:220] Checking for updates...
	I0923 13:23:16.239662 1034381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:23:16.242320 1034381 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 13:23:16.244869 1034381 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	I0923 13:23:16.247539 1034381 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:23:16.250059 1034381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:23:16.252816 1034381 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:23:16.276706 1034381 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:23:16.276838 1034381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:23:16.329952 1034381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:23:16.320638845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:23:16.330073 1034381 docker.go:318] overlay module found
	I0923 13:23:16.337257 1034381 out.go:177] * Using the docker driver based on user configuration
	I0923 13:23:16.339612 1034381 start.go:297] selected driver: docker
	I0923 13:23:16.339629 1034381 start.go:901] validating driver "docker" against <nil>
	I0923 13:23:16.339652 1034381 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:23:16.340286 1034381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:23:16.399873 1034381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:23:16.390974856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:23:16.400092 1034381 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:23:16.400323 1034381 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:23:16.402698 1034381 out.go:177] * Using Docker driver with root privileges
	I0923 13:23:16.405036 1034381 cni.go:84] Creating CNI manager for ""
	I0923 13:23:16.405100 1034381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 13:23:16.405115 1034381 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 13:23:16.405201 1034381 start.go:340] cluster config:
	{Name:addons-095355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-095355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:23:16.407776 1034381 out.go:177] * Starting "addons-095355" primary control-plane node in "addons-095355" cluster
	I0923 13:23:16.410186 1034381 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 13:23:16.412651 1034381 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:23:16.415233 1034381 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 13:23:16.415291 1034381 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0923 13:23:16.415303 1034381 cache.go:56] Caching tarball of preloaded images
	I0923 13:23:16.415308 1034381 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:23:16.415433 1034381 preload.go:172] Found /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 13:23:16.415445 1034381 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0923 13:23:16.415816 1034381 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/config.json ...
	I0923 13:23:16.415853 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/config.json: {Name:mk80c4b7a09925d41fa1eb5e2e16ca89686dbedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:16.430662 1034381 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:23:16.430784 1034381 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 13:23:16.430804 1034381 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 13:23:16.430808 1034381 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 13:23:16.430815 1034381 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 13:23:16.430820 1034381 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 13:23:33.508691 1034381 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 13:23:33.508731 1034381 cache.go:194] Successfully downloaded all kic artifacts
	I0923 13:23:33.508761 1034381 start.go:360] acquireMachinesLock for addons-095355: {Name:mk78ca0e9791e1b10b11d6132d269e16de238534 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:23:33.509341 1034381 start.go:364] duration metric: took 556.909µs to acquireMachinesLock for "addons-095355"
	I0923 13:23:33.509375 1034381 start.go:93] Provisioning new machine with config: &{Name:addons-095355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-095355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 13:23:33.509462 1034381 start.go:125] createHost starting for "" (driver="docker")
	I0923 13:23:33.511759 1034381 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 13:23:33.512021 1034381 start.go:159] libmachine.API.Create for "addons-095355" (driver="docker")
	I0923 13:23:33.512055 1034381 client.go:168] LocalClient.Create starting
	I0923 13:23:33.512189 1034381 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem
	I0923 13:23:34.016693 1034381 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/cert.pem
	I0923 13:23:34.540685 1034381 cli_runner.go:164] Run: docker network inspect addons-095355 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 13:23:34.554643 1034381 cli_runner.go:211] docker network inspect addons-095355 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 13:23:34.554732 1034381 network_create.go:284] running [docker network inspect addons-095355] to gather additional debugging logs...
	I0923 13:23:34.554756 1034381 cli_runner.go:164] Run: docker network inspect addons-095355
	W0923 13:23:34.569438 1034381 cli_runner.go:211] docker network inspect addons-095355 returned with exit code 1
	I0923 13:23:34.569471 1034381 network_create.go:287] error running [docker network inspect addons-095355]: docker network inspect addons-095355: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-095355 not found
	I0923 13:23:34.569494 1034381 network_create.go:289] output of [docker network inspect addons-095355]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-095355 not found
	
	** /stderr **
	I0923 13:23:34.569599 1034381 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:23:34.585867 1034381 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001da9a40}
	I0923 13:23:34.585909 1034381 network_create.go:124] attempt to create docker network addons-095355 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 13:23:34.585971 1034381 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-095355 addons-095355
	I0923 13:23:34.652359 1034381 network_create.go:108] docker network addons-095355 192.168.49.0/24 created
	I0923 13:23:34.652391 1034381 kic.go:121] calculated static IP "192.168.49.2" for the "addons-095355" container
	I0923 13:23:34.652460 1034381 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 13:23:34.665380 1034381 cli_runner.go:164] Run: docker volume create addons-095355 --label name.minikube.sigs.k8s.io=addons-095355 --label created_by.minikube.sigs.k8s.io=true
	I0923 13:23:34.681766 1034381 oci.go:103] Successfully created a docker volume addons-095355
	I0923 13:23:34.681862 1034381 cli_runner.go:164] Run: docker run --rm --name addons-095355-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-095355 --entrypoint /usr/bin/test -v addons-095355:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 13:23:36.670036 1034381 cli_runner.go:217] Completed: docker run --rm --name addons-095355-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-095355 --entrypoint /usr/bin/test -v addons-095355:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (1.988118433s)
	I0923 13:23:36.670065 1034381 oci.go:107] Successfully prepared a docker volume addons-095355
	I0923 13:23:36.670094 1034381 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 13:23:36.670113 1034381 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 13:23:36.670177 1034381 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-095355:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 13:23:40.578273 1034381 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-095355:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.908055191s)
	I0923 13:23:40.578304 1034381 kic.go:203] duration metric: took 3.908188127s to extract preloaded images to volume ...
	W0923 13:23:40.578456 1034381 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 13:23:40.578585 1034381 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 13:23:40.627997 1034381 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-095355 --name addons-095355 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-095355 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-095355 --network addons-095355 --ip 192.168.49.2 --volume addons-095355:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 13:23:40.936194 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Running}}
	I0923 13:23:40.957720 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:23:40.979464 1034381 cli_runner.go:164] Run: docker exec addons-095355 stat /var/lib/dpkg/alternatives/iptables
	I0923 13:23:41.029335 1034381 oci.go:144] the created container "addons-095355" has a running status.
	I0923 13:23:41.029363 1034381 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa...
	I0923 13:23:41.946393 1034381 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 13:23:41.966688 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:23:41.983746 1034381 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 13:23:41.983765 1034381 kic_runner.go:114] Args: [docker exec --privileged addons-095355 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 13:23:42.048145 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:23:42.066729 1034381 machine.go:93] provisionDockerMachine start ...
	I0923 13:23:42.066829 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:23:42.085545 1034381 main.go:141] libmachine: Using SSH client type: native
	I0923 13:23:42.085837 1034381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41452 <nil> <nil>}
	I0923 13:23:42.085848 1034381 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:23:42.219307 1034381 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-095355
	
	I0923 13:23:42.219364 1034381 ubuntu.go:169] provisioning hostname "addons-095355"
	I0923 13:23:42.219436 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:23:42.238871 1034381 main.go:141] libmachine: Using SSH client type: native
	I0923 13:23:42.239120 1034381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41452 <nil> <nil>}
	I0923 13:23:42.239139 1034381 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-095355 && echo "addons-095355" | sudo tee /etc/hostname
	I0923 13:23:42.382611 1034381 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-095355
	
	I0923 13:23:42.382699 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:23:42.399226 1034381 main.go:141] libmachine: Using SSH client type: native
	I0923 13:23:42.399483 1034381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41452 <nil> <nil>}
	I0923 13:23:42.399500 1034381 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-095355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-095355/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-095355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:23:42.531172 1034381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:23:42.531202 1034381 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19690-1028234/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-1028234/.minikube}
	I0923 13:23:42.531220 1034381 ubuntu.go:177] setting up certificates
	I0923 13:23:42.531229 1034381 provision.go:84] configureAuth start
	I0923 13:23:42.531295 1034381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-095355
	I0923 13:23:42.547514 1034381 provision.go:143] copyHostCerts
	I0923 13:23:42.547597 1034381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.pem (1082 bytes)
	I0923 13:23:42.547721 1034381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-1028234/.minikube/cert.pem (1123 bytes)
	I0923 13:23:42.547825 1034381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-1028234/.minikube/key.pem (1675 bytes)
	I0923 13:23:42.547874 1034381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca-key.pem org=jenkins.addons-095355 san=[127.0.0.1 192.168.49.2 addons-095355 localhost minikube]
	I0923 13:23:43.574826 1034381 provision.go:177] copyRemoteCerts
	I0923 13:23:43.574918 1034381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:23:43.574975 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:23:43.591174 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:23:43.687970 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 13:23:43.711134 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 13:23:43.735017 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 13:23:43.758920 1034381 provision.go:87] duration metric: took 1.227675676s to configureAuth
	I0923 13:23:43.758950 1034381 ubuntu.go:193] setting minikube options for container-runtime
	I0923 13:23:43.759132 1034381 config.go:182] Loaded profile config "addons-095355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 13:23:43.759145 1034381 machine.go:96] duration metric: took 1.692397643s to provisionDockerMachine
	I0923 13:23:43.759152 1034381 client.go:171] duration metric: took 10.247089099s to LocalClient.Create
	I0923 13:23:43.759180 1034381 start.go:167] duration metric: took 10.247161474s to libmachine.API.Create "addons-095355"
	I0923 13:23:43.759192 1034381 start.go:293] postStartSetup for "addons-095355" (driver="docker")
	I0923 13:23:43.759202 1034381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:23:43.759269 1034381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:23:43.759314 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:23:43.775539 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:23:43.872335 1034381 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:23:43.875368 1034381 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 13:23:43.875407 1034381 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 13:23:43.875425 1034381 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 13:23:43.875436 1034381 info.go:137] Remote host: Ubuntu 22.04.5 LTS
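
The three "Couldn't set key ..., no corresponding struct field found" warnings above are benign: /etc/os-release carries more KEY=value pairs than libmachine's struct has fields for, and unmatched keys are simply reported. A small sketch of that kind of parse, assuming a struct with only a few known fields:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// Only these keys have struct fields; anything else triggers a warning,
	// which is what the log shows for VERSION_CODENAME and friends.
	type osRelease struct {
		ID         string
		VersionID  string
		PrettyName string
	}

	func parseOSRelease(data string) osRelease {
		var info osRelease
		sc := bufio.NewScanner(strings.NewReader(data))
		for sc.Scan() {
			k, v, ok := strings.Cut(sc.Text(), "=")
			if !ok {
				continue
			}
			v = strings.Trim(v, `"`)
			switch k {
			case "ID":
				info.ID = v
			case "VERSION_ID":
				info.VersionID = v
			case "PRETTY_NAME":
				info.PrettyName = v
			default:
				fmt.Printf("Couldn't set key %s, no corresponding struct field found\n", k)
			}
		}
		return info
	}

	func main() {
		sample := "PRETTY_NAME=\"Ubuntu 22.04.5 LTS\"\nID=ubuntu\nVERSION_ID=\"22.04\"\nVERSION_CODENAME=jammy\n"
		fmt.Printf("%+v\n", parseOSRelease(sample))
	}
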
	I0923 13:23:43.875446 1034381 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-1028234/.minikube/addons for local assets ...
	I0923 13:23:43.875512 1034381 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-1028234/.minikube/files for local assets ...
	I0923 13:23:43.875540 1034381 start.go:296] duration metric: took 116.341698ms for postStartSetup
	I0923 13:23:43.875867 1034381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-095355
	I0923 13:23:43.892611 1034381 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/config.json ...
	I0923 13:23:43.892904 1034381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:23:43.892955 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:23:43.908838 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:23:44.002585 1034381 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 13:23:44.008201 1034381 start.go:128] duration metric: took 10.498717828s to createHost
	I0923 13:23:44.008270 1034381 start.go:83] releasing machines lock for "addons-095355", held for 10.498914278s
	I0923 13:23:44.008384 1034381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-095355
	I0923 13:23:44.024711 1034381 ssh_runner.go:195] Run: cat /version.json
	I0923 13:23:44.024762 1034381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:23:44.024888 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:23:44.024765 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:23:44.046503 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:23:44.054683 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:23:44.142748 1034381 ssh_runner.go:195] Run: systemctl --version
	I0923 13:23:44.277284 1034381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:23:44.281607 1034381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 13:23:44.306951 1034381 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 13:23:44.307103 1034381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:23:44.338511 1034381 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
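
The find/sed pair above normalizes the loopback CNI config (injecting a "name" field if absent and pinning "cniVersion" to 1.0.0), then sidelines any bridge/podman configs by renaming them to *.mk_disabled. The version pin is a plain regexp substitution over the file, sketched here in Go with the file path as an illustrative placeholder:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/cni/net.d/200-loopback.conf" // illustrative path
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g'
		re := regexp.MustCompile(`"cniVersion": ".*"`)
		patched := re.ReplaceAll(data, []byte(`"cniVersion": "1.0.0"`))
		if err := os.WriteFile(path, patched, 0644); err != nil {
			panic(err)
		}
		// Disabling a bridge config is just a rename, e.g.:
		// os.Rename("/etc/cni/net.d/87-podman-bridge.conflist",
		//           "/etc/cni/net.d/87-podman-bridge.conflist.mk_disabled")
	}
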
	I0923 13:23:44.338549 1034381 start.go:495] detecting cgroup driver to use...
	I0923 13:23:44.338599 1034381 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 13:23:44.338668 1034381 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 13:23:44.351271 1034381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:23:44.363304 1034381 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:23:44.363433 1034381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:23:44.378366 1034381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:23:44.392973 1034381 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:23:44.480371 1034381 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:23:44.583676 1034381 docker.go:233] disabling docker service ...
	I0923 13:23:44.583777 1034381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:23:44.606068 1034381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:23:44.620874 1034381 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:23:44.718907 1034381 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:23:44.809266 1034381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 13:23:44.821010 1034381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:23:44.837952 1034381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 13:23:44.848735 1034381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 13:23:44.858875 1034381 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 13:23:44.858949 1034381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 13:23:44.869002 1034381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:23:44.878543 1034381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 13:23:44.888040 1034381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:23:44.897894 1034381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:23:44.906980 1034381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 13:23:44.916605 1034381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 13:23:44.926237 1034381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 13:23:44.935896 1034381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:23:44.944403 1034381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:23:44.952896 1034381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:23:45.043673 1034381 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 13:23:45.181305 1034381 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0923 13:23:45.181494 1034381 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0923 13:23:45.185962 1034381 start.go:563] Will wait 60s for crictl version
	I0923 13:23:45.186039 1034381 ssh_runner.go:195] Run: which crictl
	I0923 13:23:45.190002 1034381 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:23:45.239381 1034381 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0923 13:23:45.239476 1034381 ssh_runner.go:195] Run: containerd --version
	I0923 13:23:45.263680 1034381 ssh_runner.go:195] Run: containerd --version
	I0923 13:23:45.298297 1034381 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0923 13:23:45.301184 1034381 cli_runner.go:164] Run: docker network inspect addons-095355 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:23:45.317906 1034381 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 13:23:45.321860 1034381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:23:45.333213 1034381 kubeadm.go:883] updating cluster {Name:addons-095355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-095355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:23:45.333347 1034381 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 13:23:45.333413 1034381 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:23:45.372795 1034381 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 13:23:45.372823 1034381 containerd.go:534] Images already preloaded, skipping extraction
	I0923 13:23:45.372888 1034381 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:23:45.412060 1034381 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 13:23:45.412085 1034381 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:23:45.412093 1034381 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0923 13:23:45.412227 1034381 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-095355 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-095355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:23:45.412320 1034381 ssh_runner.go:195] Run: sudo crictl info
	I0923 13:23:45.448728 1034381 cni.go:84] Creating CNI manager for ""
	I0923 13:23:45.448753 1034381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 13:23:45.448764 1034381 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:23:45.448819 1034381 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-095355 NodeName:addons-095355 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:23:45.449001 1034381 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-095355"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 13:23:45.449078 1034381 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
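
The kubeadm config printed at kubeadm.go:187 above is rendered from the option set logged at kubeadm.go:181 (advertise address, CIDRs, cgroup driver, CRI socket, and so on). A toy render step with text/template; the template here is a fragment for illustration, not minikube's real one:

	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`

	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		// Values mirror the log: 192.168.49.2, 8443, the containerd socket, the node name.
		err := t.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.49.2",
			APIServerPort:    8443,
			CRISocket:        "/run/containerd/containerd.sock",
			NodeName:         "addons-095355",
		})
		if err != nil {
			panic(err)
		}
	}
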
	I0923 13:23:45.458282 1034381 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:23:45.458359 1034381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:23:45.467129 1034381 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0923 13:23:45.485628 1034381 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:23:45.504062 1034381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0923 13:23:45.521964 1034381 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 13:23:45.525346 1034381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:23:45.535931 1034381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:23:45.628983 1034381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:23:45.645076 1034381 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355 for IP: 192.168.49.2
	I0923 13:23:45.645100 1034381 certs.go:194] generating shared ca certs ...
	I0923 13:23:45.645117 1034381 certs.go:226] acquiring lock for ca certs: {Name:mk03d32b578b2438d161be017440f804f69b681b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:45.645943 1034381 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.key
	I0923 13:23:46.216285 1034381 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.crt ...
	I0923 13:23:46.216332 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.crt: {Name:mk250363647b94c1372f4fb34195d6f53da0877c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:46.217234 1034381 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.key ...
	I0923 13:23:46.217254 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.key: {Name:mk9332834906e25294036acbb21f525856283781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:46.217932 1034381 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.key
	I0923 13:23:46.477518 1034381 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.crt ...
	I0923 13:23:46.477547 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.crt: {Name:mk37d81801fe8e7da9da5816387bdbfecd54a498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:46.477737 1034381 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.key ...
	I0923 13:23:46.477751 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.key: {Name:mk101634078177f2b0a305be1893f7c773412cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:46.478481 1034381 certs.go:256] generating profile certs ...
	I0923 13:23:46.478557 1034381 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.key
	I0923 13:23:46.478586 1034381 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt with IP's: []
	I0923 13:23:46.799223 1034381 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt ...
	I0923 13:23:46.799257 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: {Name:mkeee1b4fb62767301490e924d98092cebc09260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:46.799471 1034381 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.key ...
	I0923 13:23:46.799486 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.key: {Name:mke2692fa8c0a6ca507309f4c6fac21b52abba03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:46.799586 1034381 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.key.90a19c51
	I0923 13:23:46.799609 1034381 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.crt.90a19c51 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 13:23:46.979383 1034381 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.crt.90a19c51 ...
	I0923 13:23:46.979414 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.crt.90a19c51: {Name:mkdebe221bab0808819e6b7596663da0b0f5e5e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:46.979602 1034381 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.key.90a19c51 ...
	I0923 13:23:46.979617 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.key.90a19c51: {Name:mk03e0e87de5c3ada9afa7b428511ceb311cb81d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:46.980443 1034381 certs.go:381] copying /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.crt.90a19c51 -> /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.crt
	I0923 13:23:46.980530 1034381 certs.go:385] copying /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.key.90a19c51 -> /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.key
	I0923 13:23:46.980589 1034381 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/proxy-client.key
	I0923 13:23:46.980609 1034381 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/proxy-client.crt with IP's: []
	I0923 13:23:47.369693 1034381 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/proxy-client.crt ...
	I0923 13:23:47.369725 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/proxy-client.crt: {Name:mk0122367dc89d5a378cadadd81122434cb43acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:47.369914 1034381 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/proxy-client.key ...
	I0923 13:23:47.369928 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/proxy-client.key: {Name:mkd51de215e8cfcfe21ca03534344bb5925f91e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:23:47.370757 1034381 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 13:23:47.370803 1034381 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem (1082 bytes)
	I0923 13:23:47.370831 1034381 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:23:47.370860 1034381 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/key.pem (1675 bytes)
	I0923 13:23:47.371538 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:23:47.396325 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:23:47.420532 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:23:47.444605 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:23:47.468925 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 13:23:47.493418 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 13:23:47.518067 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:23:47.544544 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 13:23:47.569721 1034381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:23:47.594063 1034381 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:23:47.612332 1034381 ssh_runner.go:195] Run: openssl version
	I0923 13:23:47.617933 1034381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:23:47.627411 1034381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:23:47.630868 1034381 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 13:23 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:23:47.630959 1034381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:23:47.637852 1034381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:23:47.647506 1034381 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:23:47.650808 1034381 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:23:47.650873 1034381 kubeadm.go:392] StartCluster: {Name:addons-095355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-095355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:23:47.650958 1034381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0923 13:23:47.651014 1034381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:23:47.689013 1034381 cri.go:89] found id: ""
	I0923 13:23:47.689086 1034381 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 13:23:47.697835 1034381 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 13:23:47.706820 1034381 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 13:23:47.706893 1034381 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 13:23:47.716186 1034381 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:23:47.716208 1034381 kubeadm.go:157] found existing configuration files:
	
	I0923 13:23:47.716265 1034381 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 13:23:47.725363 1034381 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:23:47.725436 1034381 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 13:23:47.734499 1034381 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 13:23:47.743850 1034381 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:23:47.743941 1034381 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 13:23:47.752629 1034381 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 13:23:47.761970 1034381 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:23:47.762036 1034381 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 13:23:47.771011 1034381 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 13:23:47.781083 1034381 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:23:47.781148 1034381 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
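
The four grep/rm pairs above are one cleanup pass: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and a failing grep (here exit status 2, file missing) triggers an unconditional rm -f. The pattern, sketched with os/exec standing in for the ssh runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint is absent or the file is missing.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove: %v\n", endpoint, f, err)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}
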
	I0923 13:23:47.790073 1034381 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 13:23:47.834417 1034381 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 13:23:47.834766 1034381 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 13:23:47.853582 1034381 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 13:23:47.853658 1034381 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 13:23:47.853707 1034381 kubeadm.go:310] OS: Linux
	I0923 13:23:47.853772 1034381 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 13:23:47.853828 1034381 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 13:23:47.853880 1034381 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 13:23:47.853930 1034381 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 13:23:47.853982 1034381 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 13:23:47.854035 1034381 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 13:23:47.854093 1034381 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 13:23:47.854144 1034381 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 13:23:47.854193 1034381 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 13:23:47.913050 1034381 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 13:23:47.913164 1034381 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 13:23:47.913260 1034381 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 13:23:47.918429 1034381 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 13:23:47.923527 1034381 out.go:235]   - Generating certificates and keys ...
	I0923 13:23:47.923629 1034381 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 13:23:47.923711 1034381 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 13:23:48.428381 1034381 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 13:23:49.043135 1034381 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 13:23:49.589613 1034381 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 13:23:49.826571 1034381 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 13:23:50.705661 1034381 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 13:23:50.706047 1034381 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-095355 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 13:23:51.072040 1034381 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 13:23:51.072598 1034381 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-095355 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 13:23:51.259384 1034381 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 13:23:51.747074 1034381 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 13:23:52.322354 1034381 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 13:23:52.322700 1034381 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 13:23:53.024316 1034381 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 13:23:53.157031 1034381 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 13:23:53.666027 1034381 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 13:23:54.645756 1034381 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 13:23:55.384974 1034381 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 13:23:55.385641 1034381 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 13:23:55.390611 1034381 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 13:23:55.393723 1034381 out.go:235]   - Booting up control plane ...
	I0923 13:23:55.393826 1034381 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 13:23:55.393912 1034381 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 13:23:55.393994 1034381 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 13:23:55.403663 1034381 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:23:55.409310 1034381 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:23:55.409548 1034381 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 13:23:55.509778 1034381 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 13:23:55.509911 1034381 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:23:57.510820 1034381 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001360705s
	I0923 13:23:57.510912 1034381 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 13:24:03.512869 1034381 kubeadm.go:310] [api-check] The API server is healthy after 6.001992491s
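
Both waits above (kubelet healthy after ~2s via http://127.0.0.1:10248/healthz, API server healthy after ~6s) are plain HTTP polls with a 4m0s ceiling. A minimal version of such a readiness poll:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns 200 OK or the timeout elapses.
	func waitHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
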
	I0923 13:24:03.539442 1034381 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 13:24:03.552771 1034381 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 13:24:03.589887 1034381 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 13:24:03.590088 1034381 kubeadm.go:310] [mark-control-plane] Marking the node addons-095355 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 13:24:03.603055 1034381 kubeadm.go:310] [bootstrap-token] Using token: bkqov9.e5jmrjbvu3hnt012
	I0923 13:24:03.605919 1034381 out.go:235]   - Configuring RBAC rules ...
	I0923 13:24:03.606062 1034381 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 13:24:03.616212 1034381 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 13:24:03.624822 1034381 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 13:24:03.628903 1034381 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 13:24:03.633034 1034381 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 13:24:03.639956 1034381 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 13:24:03.921995 1034381 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 13:24:04.346943 1034381 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 13:24:04.920253 1034381 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 13:24:04.921370 1034381 kubeadm.go:310] 
	I0923 13:24:04.921448 1034381 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 13:24:04.921458 1034381 kubeadm.go:310] 
	I0923 13:24:04.921535 1034381 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 13:24:04.921546 1034381 kubeadm.go:310] 
	I0923 13:24:04.921571 1034381 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 13:24:04.921634 1034381 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 13:24:04.921687 1034381 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 13:24:04.921696 1034381 kubeadm.go:310] 
	I0923 13:24:04.921748 1034381 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 13:24:04.921755 1034381 kubeadm.go:310] 
	I0923 13:24:04.921802 1034381 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 13:24:04.921812 1034381 kubeadm.go:310] 
	I0923 13:24:04.921869 1034381 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 13:24:04.921950 1034381 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 13:24:04.922023 1034381 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 13:24:04.922030 1034381 kubeadm.go:310] 
	I0923 13:24:04.922113 1034381 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 13:24:04.922194 1034381 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 13:24:04.922201 1034381 kubeadm.go:310] 
	I0923 13:24:04.922285 1034381 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bkqov9.e5jmrjbvu3hnt012 \
	I0923 13:24:04.922389 1034381 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:546a615c5f6c43989f11fb23800880d0ac626083bf950e2dfa20a24fb3b1d5bd \
	I0923 13:24:04.922415 1034381 kubeadm.go:310] 	--control-plane 
	I0923 13:24:04.922428 1034381 kubeadm.go:310] 
	I0923 13:24:04.922515 1034381 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 13:24:04.922524 1034381 kubeadm.go:310] 
	I0923 13:24:04.922604 1034381 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bkqov9.e5jmrjbvu3hnt012 \
	I0923 13:24:04.922709 1034381 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:546a615c5f6c43989f11fb23800880d0ac626083bf950e2dfa20a24fb3b1d5bd 
	I0923 13:24:04.926435 1034381 kubeadm.go:310] W0923 13:23:47.830751    1015 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:24:04.926736 1034381 kubeadm.go:310] W0923 13:23:47.831845    1015 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:24:04.926951 1034381 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 13:24:04.927061 1034381 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:24:04.927083 1034381 cni.go:84] Creating CNI manager for ""
	I0923 13:24:04.927094 1034381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 13:24:04.939808 1034381 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 13:24:04.944943 1034381 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 13:24:04.948570 1034381 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 13:24:04.948591 1034381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 13:24:04.966415 1034381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 13:24:05.247971 1034381 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 13:24:05.248114 1034381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:24:05.248197 1034381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-095355 minikube.k8s.io/updated_at=2024_09_23T13_24_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-095355 minikube.k8s.io/primary=true
	I0923 13:24:05.425669 1034381 ops.go:34] apiserver oom_adj: -16
	I0923 13:24:05.425782 1034381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:24:05.926309 1034381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:24:06.426505 1034381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:24:06.925901 1034381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:24:07.426768 1034381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:24:07.925887 1034381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:24:08.426063 1034381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:24:08.925974 1034381 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:24:09.017882 1034381 kubeadm.go:1113] duration metric: took 3.769827672s to wait for elevateKubeSystemPrivileges
	I0923 13:24:09.017915 1034381 kubeadm.go:394] duration metric: took 21.367063905s to StartCluster
	I0923 13:24:09.017933 1034381 settings.go:142] acquiring lock: {Name:mk31b92312dde44fbd825c77a82e5dececb66fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:24:09.018677 1034381 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 13:24:09.019072 1034381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/kubeconfig: {Name:mkd806df25aca780e43239d5b6c8b09e764ab897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:24:09.019286 1034381 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 13:24:09.019475 1034381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 13:24:09.019717 1034381 config.go:182] Loaded profile config "addons-095355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 13:24:09.019764 1034381 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
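
The burst of "Setting addon ..." lines that follows comes from iterating the toEnable map above; each enabled addon gets its profile flag set, then a docker container inspect confirms the machine exists before manifests are installed. Schematically (the real code launches these steps concurrently, which is why the log lines interleave out of order):

	package main

	import "fmt"

	func main() {
		// A few entries from the toEnable map in the log line above.
		toEnable := map[string]bool{
			"yakd": true, "metrics-server": true, "volcano": true,
			"dashboard": false, "ambassador": false,
		}
		for name, enabled := range toEnable {
			if !enabled {
				continue
			}
			fmt.Printf("Setting addon %s=true in %q\n", name, "addons-095355")
			// ... check the host exists, then install the addon's manifests.
		}
	}
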
	I0923 13:24:09.019844 1034381 addons.go:69] Setting yakd=true in profile "addons-095355"
	I0923 13:24:09.019859 1034381 addons.go:234] Setting addon yakd=true in "addons-095355"
	I0923 13:24:09.019883 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.020412 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.020987 1034381 addons.go:69] Setting metrics-server=true in profile "addons-095355"
	I0923 13:24:09.021012 1034381 addons.go:234] Setting addon metrics-server=true in "addons-095355"
	I0923 13:24:09.021037 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.021465 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.023068 1034381 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-095355"
	I0923 13:24:09.023163 1034381 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-095355"
	I0923 13:24:09.023264 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.024859 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.026495 1034381 addons.go:69] Setting cloud-spanner=true in profile "addons-095355"
	I0923 13:24:09.026530 1034381 addons.go:234] Setting addon cloud-spanner=true in "addons-095355"
	I0923 13:24:09.026564 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.027007 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.023418 1034381 addons.go:69] Setting registry=true in profile "addons-095355"
	I0923 13:24:09.028506 1034381 addons.go:234] Setting addon registry=true in "addons-095355"
	I0923 13:24:09.028544 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.028963 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.029988 1034381 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-095355"
	I0923 13:24:09.030040 1034381 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-095355"
	I0923 13:24:09.030069 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.030478 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.039742 1034381 addons.go:69] Setting default-storageclass=true in profile "addons-095355"
	I0923 13:24:09.039792 1034381 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-095355"
	I0923 13:24:09.040196 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.023429 1034381 addons.go:69] Setting storage-provisioner=true in profile "addons-095355"
	I0923 13:24:09.040384 1034381 addons.go:234] Setting addon storage-provisioner=true in "addons-095355"
	I0923 13:24:09.040446 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.040910 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.023433 1034381 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-095355"
	I0923 13:24:09.056693 1034381 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-095355"
	I0923 13:24:09.057035 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.059924 1034381 addons.go:69] Setting gcp-auth=true in profile "addons-095355"
	I0923 13:24:09.023437 1034381 addons.go:69] Setting volcano=true in profile "addons-095355"
	I0923 13:24:09.023441 1034381 addons.go:69] Setting volumesnapshots=true in profile "addons-095355"
	I0923 13:24:09.023477 1034381 out.go:177] * Verifying Kubernetes components...
	I0923 13:24:09.080525 1034381 addons.go:69] Setting ingress=true in profile "addons-095355"
	I0923 13:24:09.080553 1034381 addons.go:69] Setting ingress-dns=true in profile "addons-095355"
	I0923 13:24:09.080558 1034381 addons.go:69] Setting inspektor-gadget=true in profile "addons-095355"
	I0923 13:24:09.096865 1034381 addons.go:234] Setting addon inspektor-gadget=true in "addons-095355"
	I0923 13:24:09.096938 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.097467 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.099793 1034381 addons.go:234] Setting addon volcano=true in "addons-095355"
	I0923 13:24:09.100039 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.100812 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.140412 1034381 addons.go:234] Setting addon volumesnapshots=true in "addons-095355"
	I0923 13:24:09.140532 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.141123 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.169938 1034381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:24:09.170062 1034381 addons.go:234] Setting addon ingress=true in "addons-095355"
	I0923 13:24:09.170115 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.170709 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.194440 1034381 mustload.go:65] Loading cluster: addons-095355
	I0923 13:24:09.194650 1034381 config.go:182] Loaded profile config "addons-095355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 13:24:09.194905 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.196860 1034381 addons.go:234] Setting addon ingress-dns=true in "addons-095355"
	I0923 13:24:09.196918 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.197392 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.221069 1034381 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 13:24:09.227417 1034381 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 13:24:09.227444 1034381 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 13:24:09.227517 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
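Each docker container inspect -f call of this form looks up the host port that Docker published for the guest's SSH port (22/tcp); the sshutil clients created later in the log connect to that port on 127.0.0.1. A hand-run equivalent (hypothetical session against the same profile container; 41452 is the value observed in this run) would be:

    docker port addons-095355 22
    # 127.0.0.1:41452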
	I0923 13:24:09.258645 1034381 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 13:24:09.261037 1034381 addons.go:234] Setting addon default-storageclass=true in "addons-095355"
	I0923 13:24:09.261092 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.261621 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.273909 1034381 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 13:24:09.287632 1034381 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 13:24:09.287659 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 13:24:09.287758 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.303617 1034381 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 13:24:09.310343 1034381 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:24:09.310782 1034381 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 13:24:09.321299 1034381 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 13:24:09.330093 1034381 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-095355"
	I0923 13:24:09.330144 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.330671 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:09.344431 1034381 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 13:24:09.344528 1034381 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 13:24:09.344698 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.348232 1034381 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:24:09.348260 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 13:24:09.348326 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.358881 1034381 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 13:24:09.358906 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 13:24:09.358987 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.363927 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:09.369286 1034381 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 13:24:09.369964 1034381 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 13:24:09.371417 1034381 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 13:24:09.394687 1034381 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 13:24:09.373244 1034381 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 13:24:09.396891 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.399538 1034381 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 13:24:09.399575 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 13:24:09.399654 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.431464 1034381 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 13:24:09.431984 1034381 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 13:24:09.432040 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 13:24:09.432167 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.436741 1034381 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 13:24:09.441752 1034381 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 13:24:09.442561 1034381 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:24:09.442768 1034381 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 13:24:09.451504 1034381 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 13:24:09.451634 1034381 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 13:24:09.451681 1034381 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 13:24:09.451738 1034381 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:24:09.451760 1034381 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 13:24:09.454471 1034381 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 13:24:09.454551 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.459235 1034381 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 13:24:09.460029 1034381 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 13:24:09.460111 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.476388 1034381 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 13:24:09.476412 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 13:24:09.476474 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.490156 1034381 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 13:24:09.492778 1034381 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 13:24:09.492803 1034381 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 13:24:09.492944 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.497539 1034381 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 13:24:09.500240 1034381 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 13:24:09.503955 1034381 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 13:24:09.503979 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 13:24:09.504045 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.529697 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.539640 1034381 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 13:24:09.539660 1034381 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 13:24:09.539718 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.558665 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.572821 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.599722 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.612584 1034381 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 13:24:09.634921 1034381 out.go:177]   - Using image docker.io/busybox:stable
	I0923 13:24:09.636308 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.639119 1034381 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 13:24:09.639140 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 13:24:09.639202 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:09.685800 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.691500 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.702391 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.704301 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.707523 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.708502 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.735497 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.737342 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:09.890929 1034381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 13:24:09.891066 1034381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:24:10.205374 1034381 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 13:24:10.205451 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 13:24:10.246808 1034381 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 13:24:10.246836 1034381 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 13:24:10.271205 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 13:24:10.319913 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 13:24:10.367124 1034381 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 13:24:10.367200 1034381 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 13:24:10.455758 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 13:24:10.473760 1034381 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 13:24:10.473828 1034381 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 13:24:10.566238 1034381 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 13:24:10.566322 1034381 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 13:24:10.595199 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 13:24:10.598651 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:24:10.646659 1034381 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 13:24:10.646736 1034381 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 13:24:10.697277 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 13:24:10.704662 1034381 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 13:24:10.704735 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 13:24:10.771981 1034381 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 13:24:10.772057 1034381 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 13:24:10.810927 1034381 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 13:24:10.811002 1034381 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 13:24:10.860763 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 13:24:10.863111 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 13:24:10.926015 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 13:24:10.955230 1034381 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 13:24:10.955301 1034381 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 13:24:10.962195 1034381 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 13:24:10.962260 1034381 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 13:24:11.003018 1034381 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 13:24:11.003111 1034381 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 13:24:11.056436 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 13:24:11.078072 1034381 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 13:24:11.078155 1034381 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 13:24:11.245910 1034381 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 13:24:11.245975 1034381 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 13:24:11.282566 1034381 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 13:24:11.282643 1034381 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 13:24:11.324378 1034381 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 13:24:11.324456 1034381 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 13:24:11.345760 1034381 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 13:24:11.345833 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 13:24:11.509761 1034381 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 13:24:11.509839 1034381 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 13:24:11.560396 1034381 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 13:24:11.560473 1034381 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 13:24:11.687434 1034381 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 13:24:11.687513 1034381 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 13:24:11.773934 1034381 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.882837637s)
	I0923 13:24:11.774013 1034381 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.883055125s)
	I0923 13:24:11.774105 1034381 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
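The /bin/bash pipeline completed above edits CoreDNS in place: it reads the coredns ConfigMap, uses sed to insert a hosts block ahead of the "forward . /etc/resolv.conf" line and a log directive ahead of "errors", then feeds the result back through kubectl replace. Reconstructed from those sed expressions (not captured from the cluster), the resulting Corefile fragment should read roughly:

    kubectl --context addons-095355 -n kube-system get configmap coredns -o yaml
    # expected fragment after the replace:
    #     log
    #     errors
    #     ...
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf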
	I0923 13:24:11.774965 1034381 node_ready.go:35] waiting up to 6m0s for node "addons-095355" to be "Ready" ...
	I0923 13:24:11.781305 1034381 node_ready.go:49] node "addons-095355" has status "Ready":"True"
	I0923 13:24:11.781387 1034381 node_ready.go:38] duration metric: took 6.367258ms for node "addons-095355" to be "Ready" ...
	I0923 13:24:11.781412 1034381 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:24:11.800817 1034381 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fzrfm" in "kube-system" namespace to be "Ready" ...
	I0923 13:24:11.857577 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 13:24:11.879398 1034381 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 13:24:11.879473 1034381 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 13:24:11.895678 1034381 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:24:11.895744 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 13:24:12.000664 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:24:12.083468 1034381 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 13:24:12.083538 1034381 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 13:24:12.239524 1034381 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 13:24:12.239598 1034381 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 13:24:12.285277 1034381 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-095355" context rescaled to 1 replicas
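This rescale is the likely cause of the coredns-7c65d6cfc9-fzrfm "not found" error just below: scaling the deployment down to one replica deletes one of the two coredns pods while pod_ready.go is still waiting on it, so that wait is skipped and restarted against the surviving pod (coredns-7c65d6cfc9-vhmjq). The equivalent manual command (assuming the namespace and deployment name shown in the log) is:

    kubectl --context addons-095355 -n kube-system scale deployment coredns --replicas=1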
	I0923 13:24:12.479235 1034381 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 13:24:12.479307 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 13:24:12.613638 1034381 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 13:24:12.613708 1034381 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 13:24:12.804174 1034381 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-fzrfm" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-fzrfm" not found
	I0923 13:24:12.804207 1034381 pod_ready.go:82] duration metric: took 1.003299315s for pod "coredns-7c65d6cfc9-fzrfm" in "kube-system" namespace to be "Ready" ...
	E0923 13:24:12.804219 1034381 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-fzrfm" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-fzrfm" not found
	I0923 13:24:12.804227 1034381 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace to be "Ready" ...
	I0923 13:24:12.844392 1034381 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 13:24:12.844428 1034381 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 13:24:13.039152 1034381 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 13:24:13.039191 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 13:24:13.159681 1034381 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 13:24:13.159709 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 13:24:13.268698 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 13:24:13.359180 1034381 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 13:24:13.359208 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 13:24:13.821381 1034381 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 13:24:13.821413 1034381 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 13:24:14.061259 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 13:24:14.835697 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:15.544032 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.272779546s)
	I0923 13:24:16.760452 1034381 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 13:24:16.760577 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:16.785622 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:17.179677 1034381 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 13:24:17.232995 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.912985342s)
	I0923 13:24:17.233069 1034381 addons.go:475] Verifying addon ingress=true in "addons-095355"
	I0923 13:24:17.234382 1034381 out.go:177] * Verifying ingress addon...
	I0923 13:24:17.236914 1034381 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 13:24:17.244701 1034381 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 13:24:17.244723 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
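kapi.go polls the pods matching the label selector until each reports Ready; the repeated "current state: Pending" lines that follow are iterations of that poll, roughly every half second in this run. A one-shot equivalent from a shell (a sketch, assuming the same context and the 6m budget used elsewhere in this run) would be:

    kubectl --context addons-095355 -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m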
	I0923 13:24:17.264381 1034381 addons.go:234] Setting addon gcp-auth=true in "addons-095355"
	I0923 13:24:17.264437 1034381 host.go:66] Checking if "addons-095355" exists ...
	I0923 13:24:17.264896 1034381 cli_runner.go:164] Run: docker container inspect addons-095355 --format={{.State.Status}}
	I0923 13:24:17.286316 1034381 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 13:24:17.286364 1034381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-095355
	I0923 13:24:17.312468 1034381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41452 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/addons-095355/id_rsa Username:docker}
	I0923 13:24:17.319025 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:17.743508 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:18.267994 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:18.836694 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:19.262487 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:19.319390 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:19.477232 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.021387653s)
	I0923 13:24:19.477349 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.882081801s)
	I0923 13:24:19.477795 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.62015166s)
	I0923 13:24:19.478106 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.477354129s)
	W0923 13:24:19.478407 1034381 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 13:24:19.478426 1034381 retry.go:31] will retry after 228.097305ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
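The failure above is an ordering race rather than a bad manifest: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same kubectl apply batch that creates its CRD, and the discovery mapping for snapshot.storage.k8s.io/v1 is not yet established, hence "ensure CRDs are installed first". minikube handles this by retrying (the apply --force re-run at 13:24:19.707396 below succeeds). Outside this harness, a sketch of one way to avoid the race (assuming the CRD and file names used by this addon) is to gate the dependent apply on CRD establishment:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml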
	I0923 13:24:19.477456 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.780094781s)
	I0923 13:24:19.477528 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.616689551s)
	I0923 13:24:19.478461 1034381 addons.go:475] Verifying addon metrics-server=true in "addons-095355"
	I0923 13:24:19.477562 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.614386289s)
	I0923 13:24:19.478473 1034381 addons.go:475] Verifying addon registry=true in "addons-095355"
	I0923 13:24:19.477597 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.551506232s)
	I0923 13:24:19.477624 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.421114768s)
	I0923 13:24:19.478173 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.209436912s)
	I0923 13:24:19.477430 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.878716272s)
	I0923 13:24:19.480730 1034381 out.go:177] * Verifying registry addon...
	I0923 13:24:19.480867 1034381 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-095355 service yakd-dashboard -n yakd-dashboard
	
	I0923 13:24:19.483710 1034381 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 13:24:19.571950 1034381 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 13:24:19.572018 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:19.707396 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:24:19.783111 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:19.992497 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:20.252509 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:20.264329 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.203019603s)
	I0923 13:24:20.264403 1034381 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-095355"
	I0923 13:24:20.264567 1034381 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.978230484s)
	I0923 13:24:20.273761 1034381 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 13:24:20.273886 1034381 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:24:20.277798 1034381 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 13:24:20.280713 1034381 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 13:24:20.282713 1034381 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 13:24:20.282771 1034381 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 13:24:20.320758 1034381 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 13:24:20.320827 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:20.375046 1034381 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 13:24:20.375123 1034381 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 13:24:20.430121 1034381 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 13:24:20.430188 1034381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 13:24:20.452061 1034381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 13:24:20.487414 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:20.755040 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:20.845974 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:20.987412 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:21.241667 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:21.283648 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:21.485964 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.778478911s)
	I0923 13:24:21.504316 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:21.507621 1034381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.055482055s)
	I0923 13:24:21.511108 1034381 addons.go:475] Verifying addon gcp-auth=true in "addons-095355"
	I0923 13:24:21.514114 1034381 out.go:177] * Verifying gcp-auth addon...
	I0923 13:24:21.517382 1034381 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 13:24:21.600896 1034381 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 13:24:21.741526 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:21.783285 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:21.810069 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:21.988653 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:22.241844 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:22.282542 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:22.488380 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:22.742363 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:22.783292 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:22.988121 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:23.242530 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:23.283559 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:23.488142 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:23.742807 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:23.783245 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:23.811483 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:23.992125 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:24.249042 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:24.285903 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:24.488337 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:24.741472 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:24.785068 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:24.988371 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:25.242970 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:25.283474 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:25.488059 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:25.742369 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:25.783313 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:25.813499 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:25.988647 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:26.241853 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:26.282026 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:26.492026 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:26.741534 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:26.784818 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:26.989424 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:27.240863 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:27.283846 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:27.487496 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:27.741689 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:27.783140 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:27.988462 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:28.242177 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:28.283086 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:28.316974 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:28.493374 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:28.745488 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:28.783753 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:28.987930 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:29.241752 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:29.283471 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:29.489409 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:29.741385 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:29.783471 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:29.987803 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:30.241865 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:30.282756 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:30.493610 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:30.742725 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:30.782139 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:30.809881 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:30.987428 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:31.241220 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:31.282885 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:31.487775 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:31.741049 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:31.782493 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:31.987886 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:32.241099 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:32.282587 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:32.488311 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:32.741310 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:32.783524 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:32.811502 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:32.988568 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:33.242088 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:33.282276 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:33.487895 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:33.741742 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:33.782097 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:33.987543 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:34.241538 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:34.283537 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:34.487785 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:34.740862 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:34.782652 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:34.987974 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:35.241286 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:35.282600 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:35.311154 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:35.487821 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:35.741920 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:35.782567 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:35.987810 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:36.241515 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:36.282902 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:36.491453 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:36.741625 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:36.782728 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:36.987935 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:37.241408 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:37.282719 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:37.486946 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:37.741631 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:37.783428 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:37.815811 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:37.988610 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:38.240818 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:38.282406 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:38.488407 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:38.741493 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:38.782697 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:38.989609 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:39.241672 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:39.282433 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:39.487626 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:39.741696 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:39.782245 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:39.987448 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:40.241110 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:40.282865 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:40.310201 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:40.487456 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:40.741235 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:40.782650 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:40.987716 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:41.241939 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:41.282518 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:41.487952 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:41.740962 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:41.782332 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:41.988009 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:42.241131 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:42.283156 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:42.311259 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:42.488492 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:42.742104 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:42.782439 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:42.987574 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:43.242441 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:43.284167 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:43.487734 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:43.745973 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:43.782285 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:43.991679 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:44.241148 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:44.283132 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:44.313779 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:44.488630 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:44.741486 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:44.783604 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:44.987959 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:45.243498 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:45.291272 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:45.490216 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:45.741567 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:45.783141 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:45.988157 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:46.241383 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:46.282742 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:46.487735 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:46.742353 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:46.783314 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:46.809959 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:46.987688 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:47.241595 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:47.283160 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:47.487585 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:47.741730 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:47.782920 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:47.988568 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:48.241628 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:48.282869 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:48.488313 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:48.742324 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:48.782743 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:48.810398 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:48.987850 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:49.241493 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:49.282935 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:49.488360 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:49.742185 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:49.782215 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:49.988324 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:50.241721 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:50.283029 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:50.487819 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:50.741140 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:50.782628 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:50.987534 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:51.241594 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:51.282824 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:51.311815 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:51.487842 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:51.741409 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:51.782983 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:51.989927 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:52.241670 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:52.283660 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:52.487997 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:52.741335 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:52.783407 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:52.988220 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:53.245867 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:53.314641 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:53.344079 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:53.488322 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:53.742823 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:53.783237 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:53.987753 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:54.242319 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:54.283395 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:54.489284 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:54.744390 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:54.783003 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:54.987350 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:55.241730 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:55.283514 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:55.315815 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:55.488625 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:55.741829 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:55.783284 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:55.988687 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:56.241233 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:56.282441 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:56.488377 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:56.752375 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:56.782538 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:56.987402 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:57.241678 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:57.282946 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:57.487776 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:57.741519 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:57.782716 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:57.811039 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:24:57.987166 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:58.241267 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:58.282731 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:58.488750 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:58.741154 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:58.782402 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:58.987717 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:59.241966 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:59.282901 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:59.488213 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:24:59.742457 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:24:59.783254 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:24:59.987659 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:00.248946 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:00.287661 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:00.314962 1034381 pod_ready.go:103] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:25:00.487927 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:00.746983 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:00.848712 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:00.987417 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:01.242264 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:01.284038 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:01.490737 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:01.743995 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:01.787826 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:01.814016 1034381 pod_ready.go:93] pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace has status "Ready":"True"
	I0923 13:25:01.814058 1034381 pod_ready.go:82] duration metric: took 49.009823744s for pod "coredns-7c65d6cfc9-vhmjq" in "kube-system" namespace to be "Ready" ...
	I0923 13:25:01.814071 1034381 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-095355" in "kube-system" namespace to be "Ready" ...
	I0923 13:25:01.821875 1034381 pod_ready.go:93] pod "etcd-addons-095355" in "kube-system" namespace has status "Ready":"True"
	I0923 13:25:01.821976 1034381 pod_ready.go:82] duration metric: took 7.895138ms for pod "etcd-addons-095355" in "kube-system" namespace to be "Ready" ...
	I0923 13:25:01.822015 1034381 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-095355" in "kube-system" namespace to be "Ready" ...
	I0923 13:25:01.831608 1034381 pod_ready.go:93] pod "kube-apiserver-addons-095355" in "kube-system" namespace has status "Ready":"True"
	I0923 13:25:01.831695 1034381 pod_ready.go:82] duration metric: took 9.624663ms for pod "kube-apiserver-addons-095355" in "kube-system" namespace to be "Ready" ...
	I0923 13:25:01.831725 1034381 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-095355" in "kube-system" namespace to be "Ready" ...
	I0923 13:25:01.838583 1034381 pod_ready.go:93] pod "kube-controller-manager-addons-095355" in "kube-system" namespace has status "Ready":"True"
	I0923 13:25:01.838666 1034381 pod_ready.go:82] duration metric: took 6.903779ms for pod "kube-controller-manager-addons-095355" in "kube-system" namespace to be "Ready" ...
	I0923 13:25:01.838694 1034381 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7km75" in "kube-system" namespace to be "Ready" ...
	I0923 13:25:01.846228 1034381 pod_ready.go:93] pod "kube-proxy-7km75" in "kube-system" namespace has status "Ready":"True"
	I0923 13:25:01.846317 1034381 pod_ready.go:82] duration metric: took 7.598941ms for pod "kube-proxy-7km75" in "kube-system" namespace to be "Ready" ...
	I0923 13:25:01.846353 1034381 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-095355" in "kube-system" namespace to be "Ready" ...
	I0923 13:25:01.989053 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:02.208984 1034381 pod_ready.go:93] pod "kube-scheduler-addons-095355" in "kube-system" namespace has status "Ready":"True"
	I0923 13:25:02.209064 1034381 pod_ready.go:82] duration metric: took 362.65618ms for pod "kube-scheduler-addons-095355" in "kube-system" namespace to be "Ready" ...
	I0923 13:25:02.209089 1034381 pod_ready.go:39] duration metric: took 50.427650315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:25:02.209135 1034381 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:25:02.209222 1034381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:25:02.228206 1034381 api_server.go:72] duration metric: took 53.208882715s to wait for apiserver process to appear ...
	I0923 13:25:02.228278 1034381 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:25:02.228316 1034381 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:25:02.236480 1034381 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 13:25:02.237794 1034381 api_server.go:141] control plane version: v1.31.1
	I0923 13:25:02.237857 1034381 api_server.go:131] duration metric: took 9.557243ms to wait for apiserver health ...
	I0923 13:25:02.237880 1034381 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:25:02.241532 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:02.283383 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:02.416366 1034381 system_pods.go:59] 18 kube-system pods found
	I0923 13:25:02.416407 1034381 system_pods.go:61] "coredns-7c65d6cfc9-vhmjq" [defe7bbf-a320-4288-bf2c-7fc32e0d8fb5] Running
	I0923 13:25:02.416417 1034381 system_pods.go:61] "csi-hostpath-attacher-0" [45205548-f61c-4938-908d-ccba01cb4c59] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 13:25:02.416425 1034381 system_pods.go:61] "csi-hostpath-resizer-0" [48eb626a-a9d1-4f35-be27-94604c394735] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 13:25:02.416433 1034381 system_pods.go:61] "csi-hostpathplugin-7272k" [9f6cd053-37aa-4f49-9dc3-d7d77b873095] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 13:25:02.416438 1034381 system_pods.go:61] "etcd-addons-095355" [09751d70-a968-45f8-9db5-cd715e6bbb2f] Running
	I0923 13:25:02.416445 1034381 system_pods.go:61] "kindnet-h4w8r" [8ae5e532-98cd-45b1-8e74-a70026004770] Running
	I0923 13:25:02.416450 1034381 system_pods.go:61] "kube-apiserver-addons-095355" [569ae133-092f-4e8a-adc3-50dd6ea0f76f] Running
	I0923 13:25:02.416454 1034381 system_pods.go:61] "kube-controller-manager-addons-095355" [3acc0136-a04c-4a19-a8ce-7ad75b217481] Running
	I0923 13:25:02.416460 1034381 system_pods.go:61] "kube-ingress-dns-minikube" [5aa0780c-43e0-41b7-b30b-9e280133ce13] Running
	I0923 13:25:02.416466 1034381 system_pods.go:61] "kube-proxy-7km75" [d2102b37-108d-488e-a181-e08fa0570124] Running
	I0923 13:25:02.416477 1034381 system_pods.go:61] "kube-scheduler-addons-095355" [ae193592-ca16-406a-af39-c6610c5a1913] Running
	I0923 13:25:02.416484 1034381 system_pods.go:61] "metrics-server-84c5f94fbc-8qvf4" [257c8283-1a4c-40b3-bfc8-621bf39df1e3] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 13:25:02.416497 1034381 system_pods.go:61] "nvidia-device-plugin-daemonset-mm7dj" [8ef39fa4-b9a6-4677-a1c2-02424564ea03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 13:25:02.416505 1034381 system_pods.go:61] "registry-66c9cd494c-k2d2s" [08ec5d70-1841-4275-80d9-904261052f24] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 13:25:02.416513 1034381 system_pods.go:61] "registry-proxy-mg8qm" [25545c68-e953-40fe-b4be-67ff7a5e0e3d] Running
	I0923 13:25:02.416521 1034381 system_pods.go:61] "snapshot-controller-56fcc65765-4wq9p" [3c317bcb-dc38-476f-9955-dcbb25bf54b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 13:25:02.416529 1034381 system_pods.go:61] "snapshot-controller-56fcc65765-spflc" [748117ef-3f9b-40e4-97f6-8b81d0092ac7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 13:25:02.416536 1034381 system_pods.go:61] "storage-provisioner" [8ca5a712-c9cc-46b1-9a32-002e31388ba5] Running
	I0923 13:25:02.416542 1034381 system_pods.go:74] duration metric: took 178.64316ms to wait for pod list to return data ...
	I0923 13:25:02.416554 1034381 default_sa.go:34] waiting for default service account to be created ...
	I0923 13:25:02.488278 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:02.609068 1034381 default_sa.go:45] found service account: "default"
	I0923 13:25:02.609095 1034381 default_sa.go:55] duration metric: took 192.533907ms for default service account to be created ...
	I0923 13:25:02.609112 1034381 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 13:25:02.741894 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:02.782225 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:02.818571 1034381 system_pods.go:86] 18 kube-system pods found
	I0923 13:25:02.818610 1034381 system_pods.go:89] "coredns-7c65d6cfc9-vhmjq" [defe7bbf-a320-4288-bf2c-7fc32e0d8fb5] Running
	I0923 13:25:02.818621 1034381 system_pods.go:89] "csi-hostpath-attacher-0" [45205548-f61c-4938-908d-ccba01cb4c59] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 13:25:02.818655 1034381 system_pods.go:89] "csi-hostpath-resizer-0" [48eb626a-a9d1-4f35-be27-94604c394735] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 13:25:02.818674 1034381 system_pods.go:89] "csi-hostpathplugin-7272k" [9f6cd053-37aa-4f49-9dc3-d7d77b873095] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 13:25:02.818681 1034381 system_pods.go:89] "etcd-addons-095355" [09751d70-a968-45f8-9db5-cd715e6bbb2f] Running
	I0923 13:25:02.818690 1034381 system_pods.go:89] "kindnet-h4w8r" [8ae5e532-98cd-45b1-8e74-a70026004770] Running
	I0923 13:25:02.818696 1034381 system_pods.go:89] "kube-apiserver-addons-095355" [569ae133-092f-4e8a-adc3-50dd6ea0f76f] Running
	I0923 13:25:02.818701 1034381 system_pods.go:89] "kube-controller-manager-addons-095355" [3acc0136-a04c-4a19-a8ce-7ad75b217481] Running
	I0923 13:25:02.818710 1034381 system_pods.go:89] "kube-ingress-dns-minikube" [5aa0780c-43e0-41b7-b30b-9e280133ce13] Running
	I0923 13:25:02.818714 1034381 system_pods.go:89] "kube-proxy-7km75" [d2102b37-108d-488e-a181-e08fa0570124] Running
	I0923 13:25:02.818743 1034381 system_pods.go:89] "kube-scheduler-addons-095355" [ae193592-ca16-406a-af39-c6610c5a1913] Running
	I0923 13:25:02.818764 1034381 system_pods.go:89] "metrics-server-84c5f94fbc-8qvf4" [257c8283-1a4c-40b3-bfc8-621bf39df1e3] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 13:25:02.818779 1034381 system_pods.go:89] "nvidia-device-plugin-daemonset-mm7dj" [8ef39fa4-b9a6-4677-a1c2-02424564ea03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 13:25:02.818786 1034381 system_pods.go:89] "registry-66c9cd494c-k2d2s" [08ec5d70-1841-4275-80d9-904261052f24] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 13:25:02.818794 1034381 system_pods.go:89] "registry-proxy-mg8qm" [25545c68-e953-40fe-b4be-67ff7a5e0e3d] Running
	I0923 13:25:02.818801 1034381 system_pods.go:89] "snapshot-controller-56fcc65765-4wq9p" [3c317bcb-dc38-476f-9955-dcbb25bf54b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 13:25:02.818810 1034381 system_pods.go:89] "snapshot-controller-56fcc65765-spflc" [748117ef-3f9b-40e4-97f6-8b81d0092ac7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 13:25:02.818821 1034381 system_pods.go:89] "storage-provisioner" [8ca5a712-c9cc-46b1-9a32-002e31388ba5] Running
	I0923 13:25:02.818842 1034381 system_pods.go:126] duration metric: took 209.722106ms to wait for k8s-apps to be running ...
	I0923 13:25:02.818856 1034381 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:25:02.818925 1034381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:25:02.831484 1034381 system_svc.go:56] duration metric: took 12.619082ms WaitForService to wait for kubelet
	I0923 13:25:02.831513 1034381 kubeadm.go:582] duration metric: took 53.812194933s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:25:02.831545 1034381 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:25:02.988167 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:03.012860 1034381 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 13:25:03.012905 1034381 node_conditions.go:123] node cpu capacity is 2
	I0923 13:25:03.012920 1034381 node_conditions.go:105] duration metric: took 181.368688ms to run NodePressure ...
	I0923 13:25:03.012933 1034381 start.go:241] waiting for startup goroutines ...
	I0923 13:25:03.242495 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:03.283545 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:03.500160 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:03.742275 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:03.783951 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:03.988392 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:04.241908 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:04.282399 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:04.488554 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:04.744200 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:04.787070 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:04.988287 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:05.242001 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:05.282265 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:05.487574 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:05.743237 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:05.782511 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:05.987368 1034381 kapi.go:107] duration metric: took 46.503616724s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 13:25:06.241803 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:06.282111 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:06.741309 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:06.782677 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:07.242534 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:07.283926 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:07.741497 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:07.783168 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:08.242681 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:08.284809 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:08.741331 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:08.783793 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:09.243181 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:09.282722 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:09.741208 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:09.782782 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:10.242926 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:10.282183 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:10.740836 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:10.782337 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:11.241462 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:11.342721 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:11.741786 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:11.782533 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:12.243159 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:12.283095 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:12.742921 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:12.843539 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:13.242404 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:13.344492 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:13.741392 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:13.782764 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:14.242414 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:14.295471 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:14.741244 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:14.783074 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:15.241606 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:15.283521 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:15.742920 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:15.782600 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:16.241823 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:16.282702 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:16.742756 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:16.782391 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:17.243023 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:17.282275 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:17.757530 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:17.783209 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:18.242417 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:18.282721 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:18.743767 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:18.783904 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:19.242146 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:19.283049 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:19.740723 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:19.783450 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:20.241861 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:20.282239 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:20.742382 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:20.842904 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:21.242210 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:21.283414 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:21.742522 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:21.850894 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:22.242012 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:22.283227 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:22.742522 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:22.783611 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:23.242245 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:23.283535 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:23.744886 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:23.782818 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:24.243600 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:24.284663 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:24.743233 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:24.814085 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:25.241702 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:25.282603 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:25.741472 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:25.783981 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:26.241516 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:26.285818 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:26.740898 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:26.782762 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:27.242128 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:27.282672 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:27.741829 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:27.783061 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:28.242910 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:28.342295 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:28.741172 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:28.783454 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:29.243734 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:29.283019 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:29.742064 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:29.782231 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:30.241836 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:30.285843 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:30.745407 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:30.783318 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:31.241125 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:31.282878 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:31.741545 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:31.782867 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:32.241788 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:32.282910 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:32.741972 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:32.783219 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:33.241629 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:33.283664 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:33.741804 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:33.783360 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:34.242190 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:34.343858 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:34.742356 1034381 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:34.783810 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:35.242706 1034381 kapi.go:107] duration metric: took 1m18.005787006s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 13:25:35.284055 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:35.783760 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:36.282955 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:36.783301 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:37.291067 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:37.782970 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:38.283740 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:38.783732 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:39.283263 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:39.782722 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:40.282349 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:40.783632 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:41.283083 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:41.782480 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:42.283007 1034381 kapi.go:107] duration metric: took 1m22.005209967s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 13:27:06.521902 1034381 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 13:27:06.521928 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:07.021594 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:07.521979 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:08.021240 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:08.521265 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:09.022181 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:09.521945 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:10.021525 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:10.521820 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:11.021009 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:11.520646 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:12.021542 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:12.520872 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:13.021285 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:13.521299 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:14.021060 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:14.521046 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:15.021729 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:15.521934 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:16.021367 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:16.521382 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:17.020997 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:17.521028 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:18.021266 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:18.520591 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:19.021564 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:19.522193 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:20.020901 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:20.520582 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:21.022701 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:21.521407 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:22.020912 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:22.520944 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:23.021254 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:23.521471 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:24.021542 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:24.525983 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:25.021741 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:25.522013 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:26.021894 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:26.522421 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:27.020954 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:27.522094 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:28.021897 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:28.521249 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:29.023821 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:29.520839 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:30.026413 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:30.520906 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:31.022048 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:31.521882 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:32.021135 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:32.521167 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:33.022062 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:33.521214 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:34.021585 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:34.521170 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:35.021642 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:35.521626 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:36.021575 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:36.522176 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:37.021877 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:37.521070 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:38.021493 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:38.521010 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:39.021015 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:39.521083 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:40.022840 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:40.521168 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:41.022094 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:41.521584 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:42.022216 1034381 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:42.521552 1034381 kapi.go:107] duration metric: took 3m21.004167365s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 13:27:42.523559 1034381 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-095355 cluster.
	I0923 13:27:42.526340 1034381 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 13:27:42.527807 1034381 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 13:27:42.529720 1034381 out.go:177] * Enabled addons: storage-provisioner-rancher, volcano, nvidia-device-plugin, metrics-server, cloud-spanner, ingress-dns, inspektor-gadget, storage-provisioner, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0923 13:27:42.531012 1034381 addons.go:510] duration metric: took 3m33.511244076s for enable addons: enabled=[storage-provisioner-rancher volcano nvidia-device-plugin metrics-server cloud-spanner ingress-dns inspektor-gadget storage-provisioner yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0923 13:27:42.531064 1034381 start.go:246] waiting for cluster config update ...
	I0923 13:27:42.531088 1034381 start.go:255] writing updated cluster config ...
	I0923 13:27:42.531429 1034381 ssh_runner.go:195] Run: rm -f paused
	I0923 13:27:42.936224 1034381 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 13:27:42.938740 1034381 out.go:177] * Done! kubectl is now configured to use "addons-095355" cluster and "default" namespace by default
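
Note: the repeated kapi.go:96 lines above are minikube polling each addon's pods by label selector roughly twice a second until they leave Pending, then logging a duration metric at kapi.go:107. A minimal client-go sketch of that pattern follows; the kubeconfig path, timeout, and helper name are illustrative assumptions, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until one is Running,
// roughly the loop behind the kapi.go:96 lines above.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		fmt.Printf("waiting for pod %q, still pending\n", selector)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 5*time.Minute); err != nil {
		panic(err)
	}
}
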
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	cab4b74a94b7c       4f725bf50aaa5       23 seconds ago      Exited              gadget                                   6                   be83019867fe4       gadget-4bwtr
	4b74cc01f8b60       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   ae137f61fe38d       gcp-auth-89d5ffd79-vtqpk
	c20a91f1ea60a       8b46b1cd48760       4 minutes ago       Running             admission                                0                   e9a6e72e416c9       volcano-admission-77d7d48b68-fs748
	740aacb0d6e78       d9c7ad4c226bf       4 minutes ago       Running             volcano-scheduler                        1                   dddc2cd2538fb       volcano-scheduler-576bc46687-h8fw7
	0fd6977c8c7d0       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   d23776c3f57ee       csi-hostpathplugin-7272k
	39b0122552a05       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   d23776c3f57ee       csi-hostpathplugin-7272k
	a0e6b6b7600d7       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   d23776c3f57ee       csi-hostpathplugin-7272k
	d879a856e92f1       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   d23776c3f57ee       csi-hostpathplugin-7272k
	bf8282174552f       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   d23776c3f57ee       csi-hostpathplugin-7272k
	1be8dc7563402       289a818c8d9c5       5 minutes ago       Running             controller                               0                   cb0fa2c2cecd4       ingress-nginx-controller-bc57996ff-tp5j6
	08f1e7e0a67ae       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   d23776c3f57ee       csi-hostpathplugin-7272k
	4b314ca165b29       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   3ccdf6cea2386       csi-hostpath-resizer-0
	7c2455b0eb034       d9c7ad4c226bf       5 minutes ago       Exited              volcano-scheduler                        0                   dddc2cd2538fb       volcano-scheduler-576bc46687-h8fw7
	e063a79ec8000       420193b27261a       5 minutes ago       Exited              patch                                    0                   ba7ffef7e63bf       ingress-nginx-admission-patch-nf8lt
	104d835850832       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   2144fdbf4b65a       volcano-controllers-56675bb4d5-zc2sm
	84460ab06d823       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   3e8aa0563567a       local-path-provisioner-86d989889c-smkk7
	2320098a51583       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   9ad95ffb429d0       csi-hostpath-attacher-0
	39591d5e809d6       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   cbfa24a80d92d       snapshot-controller-56fcc65765-4wq9p
	087ea97eaddb1       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   7756e6cdf351b       cloud-spanner-emulator-5b584cc74-8svqk
	1ffbc702ceed3       420193b27261a       5 minutes ago       Exited              create                                   0                   9bcfae2d87b56       ingress-nginx-admission-create-mld5x
	836275fa370e1       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   71b34ace518b6       nvidia-device-plugin-daemonset-mm7dj
	3f65d21dc0ad4       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   43f9db471cfe2       registry-66c9cd494c-k2d2s
	89630f7353b99       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   26b3a8cb34eab       snapshot-controller-56fcc65765-spflc
	87f7fcbfa6cea       77bdba588b953       5 minutes ago       Running             yakd                                     0                   9917373086c2b       yakd-dashboard-67d98fc6b-w88np
	117874540153c       2f6c962e7b831       6 minutes ago       Running             coredns                                  0                   2522a0fe4cd66       coredns-7c65d6cfc9-vhmjq
	e0b58bbd97d90       3410e1561990a       6 minutes ago       Running             registry-proxy                           0                   f3df7d7b6676c       registry-proxy-mg8qm
	eee1699644045       5548a49bb60ba       6 minutes ago       Running             metrics-server                           0                   5d5f9865967ec       metrics-server-84c5f94fbc-8qvf4
	882b583d1ec1a       35508c2f890c4       6 minutes ago       Running             minikube-ingress-dns                     0                   747029b9cf2c9       kube-ingress-dns-minikube
	aeab809eb8a9f       ba04bb24b9575       6 minutes ago       Running             storage-provisioner                      0                   e20b3709785ea       storage-provisioner
	9577676abdd22       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   71c11455dc26c       kube-proxy-7km75
	453ac10d8e637       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   c4c0b78fc53c9       kindnet-h4w8r
	cb5612e2e80c0       7f8aa378bb47d       7 minutes ago       Running             kube-scheduler                           0                   ea8501ad69e4f       kube-scheduler-addons-095355
	79e433b0ee7e5       27e3830e14027       7 minutes ago       Running             etcd                                     0                   2ca641e3a1973       etcd-addons-095355
	e5fd1db59e309       d3f53a98c0a9d       7 minutes ago       Running             kube-apiserver                           0                   b809bfbe22d85       kube-apiserver-addons-095355
	f0254e1e34178       279f381cb3736       7 minutes ago       Running             kube-controller-manager                  0                   4a02a31045ba7       kube-controller-manager-addons-095355
	
	
	==> containerd <==
	Sep 23 13:28:04 addons-095355 containerd[814]: time="2024-09-23T13:28:04.346857691Z" level=info msg="StopPodSandbox for \"b02280aff7414550c0b8dec2d1533bdfc21361803c83d02e9c89c16053971b85\""
	Sep 23 13:28:04 addons-095355 containerd[814]: time="2024-09-23T13:28:04.355273317Z" level=info msg="TearDown network for sandbox \"b02280aff7414550c0b8dec2d1533bdfc21361803c83d02e9c89c16053971b85\" successfully"
	Sep 23 13:28:04 addons-095355 containerd[814]: time="2024-09-23T13:28:04.355312004Z" level=info msg="StopPodSandbox for \"b02280aff7414550c0b8dec2d1533bdfc21361803c83d02e9c89c16053971b85\" returns successfully"
	Sep 23 13:28:04 addons-095355 containerd[814]: time="2024-09-23T13:28:04.355914056Z" level=info msg="RemovePodSandbox for \"b02280aff7414550c0b8dec2d1533bdfc21361803c83d02e9c89c16053971b85\""
	Sep 23 13:28:04 addons-095355 containerd[814]: time="2024-09-23T13:28:04.355957181Z" level=info msg="Forcibly stopping sandbox \"b02280aff7414550c0b8dec2d1533bdfc21361803c83d02e9c89c16053971b85\""
	Sep 23 13:28:04 addons-095355 containerd[814]: time="2024-09-23T13:28:04.364189649Z" level=info msg="TearDown network for sandbox \"b02280aff7414550c0b8dec2d1533bdfc21361803c83d02e9c89c16053971b85\" successfully"
	Sep 23 13:28:04 addons-095355 containerd[814]: time="2024-09-23T13:28:04.371579750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b02280aff7414550c0b8dec2d1533bdfc21361803c83d02e9c89c16053971b85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 23 13:28:04 addons-095355 containerd[814]: time="2024-09-23T13:28:04.371700387Z" level=info msg="RemovePodSandbox \"b02280aff7414550c0b8dec2d1533bdfc21361803c83d02e9c89c16053971b85\" returns successfully"
	Sep 23 13:30:38 addons-095355 containerd[814]: time="2024-09-23T13:30:38.290314014Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 23 13:30:38 addons-095355 containerd[814]: time="2024-09-23T13:30:38.440614923Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 23 13:30:38 addons-095355 containerd[814]: time="2024-09-23T13:30:38.442342880Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 23 13:30:38 addons-095355 containerd[814]: time="2024-09-23T13:30:38.446067202Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 155.698354ms"
	Sep 23 13:30:38 addons-095355 containerd[814]: time="2024-09-23T13:30:38.446117547Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 23 13:30:38 addons-095355 containerd[814]: time="2024-09-23T13:30:38.448368601Z" level=info msg="CreateContainer within sandbox \"be83019867fe4d0d3c927af72e67b4ad9657d3cf9991f53af5ab40ea2414db99\" for container &ContainerMetadata{Name:gadget,Attempt:6,}"
	Sep 23 13:30:38 addons-095355 containerd[814]: time="2024-09-23T13:30:38.466985786Z" level=info msg="CreateContainer within sandbox \"be83019867fe4d0d3c927af72e67b4ad9657d3cf9991f53af5ab40ea2414db99\" for &ContainerMetadata{Name:gadget,Attempt:6,} returns container id \"cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83\""
	Sep 23 13:30:38 addons-095355 containerd[814]: time="2024-09-23T13:30:38.467744249Z" level=info msg="StartContainer for \"cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83\""
	Sep 23 13:30:38 addons-095355 containerd[814]: time="2024-09-23T13:30:38.520884183Z" level=info msg="StartContainer for \"cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83\" returns successfully"
	Sep 23 13:30:39 addons-095355 containerd[814]: time="2024-09-23T13:30:39.932848199Z" level=error msg="ExecSync for \"cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83\" failed" error="failed to exec in container: failed to start exec \"79c1896d40106d2b5732cb632f2651294fffbc77ea2c5709d03a0edba5785ce3\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 23 13:30:39 addons-095355 containerd[814]: time="2024-09-23T13:30:39.952652834Z" level=error msg="ExecSync for \"cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83\" failed" error="failed to exec in container: failed to start exec \"cdb087477b1764a68793f8c27228938bb6a2513dede435cf3ec22362eeeb912f\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 23 13:30:39 addons-095355 containerd[814]: time="2024-09-23T13:30:39.962836161Z" level=error msg="ExecSync for \"cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83\" failed" error="failed to exec in container: failed to start exec \"6fc7ded8b7313d3e5aa56771f99207a523eaab4be81f5e2ac8efea4c8850f8dc\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 23 13:30:40 addons-095355 containerd[814]: time="2024-09-23T13:30:40.120042785Z" level=info msg="shim disconnected" id=cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83 namespace=k8s.io
	Sep 23 13:30:40 addons-095355 containerd[814]: time="2024-09-23T13:30:40.120205029Z" level=warning msg="cleaning up after shim disconnected" id=cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83 namespace=k8s.io
	Sep 23 13:30:40 addons-095355 containerd[814]: time="2024-09-23T13:30:40.120217443Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 13:30:40 addons-095355 containerd[814]: time="2024-09-23T13:30:40.733709396Z" level=info msg="RemoveContainer for \"a29f6ef1f505ac94d629398307b78ff35e109d8cb6c9c625d7e0e1170ecc5d73\""
	Sep 23 13:30:40 addons-095355 containerd[814]: time="2024-09-23T13:30:40.749719286Z" level=info msg="RemoveContainer for \"a29f6ef1f505ac94d629398307b78ff35e109d8cb6c9c625d7e0e1170ecc5d73\" returns successfully"
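
Note: the ExecSync errors above are a probe racing a container that has already exited; the gadget container is crash-looping (Attempt:6 in the CreateContainer metadata), so by the time the exec lands the container is stopped. A rough sketch of listing container states over the CRI socket before attempting an exec, assuming the containerd socket path minikube configures (unix:///run/containerd/containerd.sock, also visible in the node annotations later in this report):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint over the unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Exec only makes sense for CONTAINER_RUNNING; CONTAINER_EXITED is
		// exactly the "stopped container" the OCI runtime error complains about.
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}
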
	
	
	==> coredns [117874540153cf09fbd966e6c14bd1a4689db4d556c257956f6df48e7897b371] <==
	[INFO] 10.244.0.3:50949 - 23940 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000174683s
	[INFO] 10.244.0.3:36324 - 16817 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002899488s
	[INFO] 10.244.0.3:36324 - 57807 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002253334s
	[INFO] 10.244.0.3:33458 - 45869 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076609s
	[INFO] 10.244.0.3:33458 - 289 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083928s
	[INFO] 10.244.0.3:55237 - 12295 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000111751s
	[INFO] 10.244.0.3:55237 - 58634 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171991s
	[INFO] 10.244.0.3:56600 - 32738 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085175s
	[INFO] 10.244.0.3:56600 - 54497 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084896s
	[INFO] 10.244.0.3:60579 - 6627 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070751s
	[INFO] 10.244.0.3:60579 - 16097 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004493s
	[INFO] 10.244.0.3:50362 - 65070 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001538834s
	[INFO] 10.244.0.3:50362 - 25900 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001929051s
	[INFO] 10.244.0.3:33642 - 25320 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076125s
	[INFO] 10.244.0.3:33642 - 38126 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000064868s
	[INFO] 10.244.0.24:50939 - 25316 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000173362s
	[INFO] 10.244.0.24:42657 - 22508 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000192971s
	[INFO] 10.244.0.24:37526 - 22398 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016546s
	[INFO] 10.244.0.24:36893 - 49783 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139156s
	[INFO] 10.244.0.24:38119 - 38790 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135267s
	[INFO] 10.244.0.24:40174 - 7550 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137818s
	[INFO] 10.244.0.24:49628 - 31954 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002251834s
	[INFO] 10.244.0.24:40569 - 12627 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002119783s
	[INFO] 10.244.0.24:60816 - 7109 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001702727s
	[INFO] 10.244.0.24:39645 - 50916 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001535355s
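
Note: the NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-list expansion. With the cluster default of ndots:5, a name like registry.kube-system.svc.cluster.local has only four dots, so the resolver appends each search domain first (NXDOMAIN for every suffix) before trying the name as-is (NOERROR). A stdlib-only sketch of that query ordering; the search list is taken from the log, ndots:5 is the assumed pod default:

package main

import (
	"fmt"
	"strings"
)

// queryOrder mimics resolv.conf behavior: if the name has fewer than ndots
// dots, try each search suffix first, then the name as-is.
func queryOrder(name string, search []string, ndots int) []string {
	suffixed := make([]string, 0, len(search)+1)
	for _, s := range search {
		suffixed = append(suffixed, name+"."+s)
	}
	if strings.Count(name, ".") >= ndots {
		return append([]string{name}, suffixed...)
	}
	return append(suffixed, name)
}

func main() {
	search := []string{
		"kube-system.svc.cluster.local", // the pod's own namespace comes first
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	for _, q := range queryOrder("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q) // matches the NXDOMAIN sequence in the coredns log above
	}
}
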
	
	
	==> describe nodes <==
	Name:               addons-095355
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-095355
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-095355
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_24_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-095355
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-095355"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:24:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-095355
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:31:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:28:09 +0000   Mon, 23 Sep 2024 13:23:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:28:09 +0000   Mon, 23 Sep 2024 13:23:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:28:09 +0000   Mon, 23 Sep 2024 13:23:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:28:09 +0000   Mon, 23 Sep 2024 13:24:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-095355
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0a480698db94eb09e2022ce6b8a0393
	  System UUID:                553d4438-66c4-4682-9fc8-aaaa73241836
	  Boot ID:                    202f1c12-eb3b-4d2d-8c7a-af93b822fb33
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-8svqk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  gadget                      gadget-4bwtr                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  gcp-auth                    gcp-auth-89d5ffd79-vtqpk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-tp5j6    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m44s
	  kube-system                 coredns-7c65d6cfc9-vhmjq                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m52s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 csi-hostpathplugin-7272k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 etcd-addons-095355                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m57s
	  kube-system                 kindnet-h4w8r                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m52s
	  kube-system                 kube-apiserver-addons-095355                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 kube-controller-manager-addons-095355       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 kube-proxy-7km75                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-scheduler-addons-095355                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 metrics-server-84c5f94fbc-8qvf4             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m47s
	  kube-system                 nvidia-device-plugin-daemonset-mm7dj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 registry-66c9cd494c-k2d2s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 registry-proxy-mg8qm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 snapshot-controller-56fcc65765-4wq9p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 snapshot-controller-56fcc65765-spflc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  local-path-storage          local-path-provisioner-86d989889c-smkk7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  volcano-system              volcano-admission-77d7d48b68-fs748          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  volcano-system              volcano-controllers-56675bb4d5-zc2sm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  volcano-system              volcano-scheduler-576bc46687-h8fw7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-w88np              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 6m50s  kube-proxy       
	  Normal   Starting                 6m57s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m57s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m57s  kubelet          Node addons-095355 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m57s  kubelet          Node addons-095355 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m57s  kubelet          Node addons-095355 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m53s  node-controller  Node addons-095355 event: Registered Node addons-095355 in Controller
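
Note: the allocated-resources table above is the direct cause of the Volcano failure in this report: 1050m of the node's 2 allocatable CPUs are already requested, so a job asking for roughly another full CPU is rejected with "0/1 nodes are available: 1 Insufficient cpu." A quick check of that arithmetic with apimachinery's resource.Quantity; the 1-CPU request attributed to test-job-nginx-0 is an assumption based on the test's vcjob, not something shown in this section:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	allocatable := resource.MustParse("2")   // node allocatable cpu above
	requested := resource.MustParse("1050m") // summed CPU requests from the table
	job := resource.MustParse("1")           // assumed request of test-job-nginx-0

	free := allocatable.MilliValue() - requested.MilliValue()
	fmt.Printf("free: %dm, job wants: %dm, schedulable: %v\n",
		free, job.MilliValue(), job.MilliValue() <= free)
	// prints: free: 950m, job wants: 1000m, schedulable: false
}
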
	
	
	==> dmesg <==
	
	
	==> etcd [79e433b0ee7e58d260c36db8337ccc7fef72632b4f2387d3fbc23f0887e2bf6e] <==
	{"level":"info","ts":"2024-09-23T13:23:58.128600Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T13:23:58.128855Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-23T13:23:58.129065Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-23T13:23:58.130266Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T13:23:58.130470Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T13:23:58.375373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-23T13:23:58.375480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-23T13:23:58.375536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-23T13:23:58.375600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T13:23:58.375629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T13:23:58.375679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-23T13:23:58.375718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T13:23:58.379511Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-095355 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T13:23:58.379608Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:23:58.379671Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:23:58.379958Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:23:58.380938Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:23:58.382139Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T13:23:58.387900Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:23:58.389091Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T13:23:58.389236Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T13:23:58.403498Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T13:23:58.389589Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:23:58.407594Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:23:58.407784Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [4b74cc01f8b60c249f13662d067c22a4de5782f76152794da03e55f2bc69eed6] <==
	2024/09/23 13:27:41 GCP Auth Webhook started!
	2024/09/23 13:27:59 Ready to marshal response ...
	2024/09/23 13:27:59 Ready to write response ...
	2024/09/23 13:28:00 Ready to marshal response ...
	2024/09/23 13:28:00 Ready to write response ...
	
	
	==> kernel <==
	 13:31:01 up 1 day, 19:13,  0 users,  load average: 0.55, 1.23, 2.52
	Linux addons-095355 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [453ac10d8e637e74290c9afb48750726778f210226ed7613544f020d176c4965] <==
	I0923 13:29:00.723131       1 main.go:299] handling current node
	I0923 13:29:10.714597       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:29:10.714632       1 main.go:299] handling current node
	I0923 13:29:20.719421       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:29:20.719453       1 main.go:299] handling current node
	I0923 13:29:30.723318       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:29:30.723387       1 main.go:299] handling current node
	I0923 13:29:40.713864       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:29:40.713911       1 main.go:299] handling current node
	I0923 13:29:50.718980       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:29:50.719035       1 main.go:299] handling current node
	I0923 13:30:00.713636       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:30:00.713684       1 main.go:299] handling current node
	I0923 13:30:10.713670       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:30:10.713707       1 main.go:299] handling current node
	I0923 13:30:20.720279       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:30:20.720494       1 main.go:299] handling current node
	I0923 13:30:30.715009       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:30:30.715052       1 main.go:299] handling current node
	I0923 13:30:40.714781       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:30:40.714812       1 main.go:299] handling current node
	I0923 13:30:50.715422       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:30:50.715456       1 main.go:299] handling current node
	I0923 13:31:00.723624       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:31:00.723657       1 main.go:299] handling current node
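
Note: kindnet logs the same pair of lines every ten seconds because it reconciles node routes on a fixed interval rather than only on events. The shape of that loop as a plain-Go sketch; the reconcile body is a stand-in, not kindnet's code:

package main

import (
	"log"
	"time"
)

func reconcile() {
	// stand-in for kindnet's per-node handling ("Handling node with IPs: ...")
	log.Println("handling current node")
}

func main() {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		reconcile()
	}
}
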
	
	
	==> kube-apiserver [e5fd1db59e309b011f8e616c1acf24580885721fa1c254a28cb515a0944a2add] <==
	E0923 13:26:24.550142       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.0.129:443: connect: connection refused" logger="UnhandledError"
	W0923 13:26:24.551983       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:25.087077       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:26.186754       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:27.201190       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:28.253036       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:29.286216       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:30.342359       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:31.398174       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:32.402043       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:33.499771       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:34.521110       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:35.575447       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:36.621579       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:37.691486       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:38.733155       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:26:39.772922       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.175.157:443: connect: connection refused
	W0923 13:27:06.413452       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.0.129:443: connect: connection refused
	E0923 13:27:06.413496       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.0.129:443: connect: connection refused" logger="UnhandledError"
	W0923 13:27:24.497844       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.0.129:443: connect: connection refused
	E0923 13:27:24.497884       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.0.129:443: connect: connection refused" logger="UnhandledError"
	W0923 13:27:24.558141       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.0.129:443: connect: connection refused
	E0923 13:27:24.558182       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.0.129:443: connect: connection refused" logger="UnhandledError"
	I0923 13:27:59.461144       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0923 13:27:59.513114       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
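
Note: the contrast above is admission-webhook failurePolicy at work: the volcano webhooks "fail closed" (dispatcher.go:225 rejects requests while the backend at 10.97.175.157:443 is unreachable), whereas gcp-auth-mutate "fails open" (dispatcher.go:210 logs the error and admits the request). A sketch of the two settings with the typed API; only the fields relevant to the distinction are filled in, so this is not a complete webhook configuration:

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	fail := admissionregistrationv1.Fail     // reject when the webhook can't be called
	ignore := admissionregistrationv1.Ignore // admit when the webhook can't be called

	volcano := admissionregistrationv1.MutatingWebhook{
		Name:          "mutatequeue.volcano.sh",
		FailurePolicy: &fail,
	}
	gcpAuth := admissionregistrationv1.MutatingWebhook{
		Name:          "gcp-auth-mutate.k8s.io",
		FailurePolicy: &ignore,
	}
	fmt.Println(*volcano.FailurePolicy, *gcpAuth.FailurePolicy)
}
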
	
	
	==> kube-controller-manager [f0254e1e34178bb2bff17935272ae02436680574ee44016bbb94c48f6199578b] <==
	I0923 13:27:24.568429       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 13:27:24.576902       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 13:27:24.581948       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 13:27:24.594558       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 13:27:25.181044       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 13:27:25.512641       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 13:27:26.184227       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 13:27:26.200105       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 13:27:26.520982       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 13:27:27.309731       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 13:27:27.338935       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 13:27:27.527105       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 13:27:27.536614       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 13:27:27.545364       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0923 13:27:28.315799       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 13:27:28.326299       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 13:27:28.336362       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0923 13:27:42.265560       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="13.366961ms"
	I0923 13:27:42.266079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="80.802µs"
	I0923 13:27:57.022466       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0923 13:27:57.064045       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0923 13:27:58.009705       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0923 13:27:58.046240       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0923 13:27:59.195428       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0923 13:28:09.631589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-095355"
	
	
	==> kube-proxy [9577676abdd22690cf03f8ac3d285e1bfdc113e6f18495fdd26af71e3c12194a] <==
	I0923 13:24:10.783190       1 server_linux.go:66] "Using iptables proxy"
	I0923 13:24:10.905444       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 13:24:10.905519       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:24:10.938000       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 13:24:10.938056       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:24:10.940179       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:24:10.941632       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:24:10.941649       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:24:10.957281       1 config.go:199] "Starting service config controller"
	I0923 13:24:10.964525       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:24:10.962684       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:24:10.965248       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:24:10.963922       1 config.go:328] "Starting node config controller"
	I0923 13:24:10.965274       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:24:11.065336       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:24:11.065389       1 shared_informer.go:320] Caches are synced for node config
	I0923 13:24:11.065403       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [cb5612e2e80c0efbb4b25793d9d6ce86881342573aaec81041cc574def229800] <==
	W0923 13:24:02.372408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 13:24:02.372451       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:02.404192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 13:24:02.404231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:02.476108       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:24:02.476147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:02.548241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 13:24:02.548289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:02.565768       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 13:24:02.565952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:02.565909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 13:24:02.566059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:02.631368       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:24:02.631420       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 13:24:02.689493       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:24:02.689746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:02.708129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 13:24:02.708172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:02.755648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 13:24:02.755689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:02.766306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 13:24:02.766520       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:24:02.807491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:24:02.807617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0923 13:24:05.234526       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 13:29:21 addons-095355 kubelet[1472]: E0923 13:29:21.288588    1472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4bwtr_gadget(119b26e6-b744-4543-96a4-112aa3284ecd)\"" pod="gadget/gadget-4bwtr" podUID="119b26e6-b744-4543-96a4-112aa3284ecd"
	Sep 23 13:29:34 addons-095355 kubelet[1472]: I0923 13:29:34.289298    1472 scope.go:117] "RemoveContainer" containerID="a29f6ef1f505ac94d629398307b78ff35e109d8cb6c9c625d7e0e1170ecc5d73"
	Sep 23 13:29:34 addons-095355 kubelet[1472]: E0923 13:29:34.289499    1472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4bwtr_gadget(119b26e6-b744-4543-96a4-112aa3284ecd)\"" pod="gadget/gadget-4bwtr" podUID="119b26e6-b744-4543-96a4-112aa3284ecd"
	Sep 23 13:29:45 addons-095355 kubelet[1472]: I0923 13:29:45.288502    1472 scope.go:117] "RemoveContainer" containerID="a29f6ef1f505ac94d629398307b78ff35e109d8cb6c9c625d7e0e1170ecc5d73"
	Sep 23 13:29:45 addons-095355 kubelet[1472]: E0923 13:29:45.288712    1472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4bwtr_gadget(119b26e6-b744-4543-96a4-112aa3284ecd)\"" pod="gadget/gadget-4bwtr" podUID="119b26e6-b744-4543-96a4-112aa3284ecd"
	Sep 23 13:29:57 addons-095355 kubelet[1472]: I0923 13:29:57.289041    1472 scope.go:117] "RemoveContainer" containerID="a29f6ef1f505ac94d629398307b78ff35e109d8cb6c9c625d7e0e1170ecc5d73"
	Sep 23 13:29:57 addons-095355 kubelet[1472]: E0923 13:29:57.289251    1472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4bwtr_gadget(119b26e6-b744-4543-96a4-112aa3284ecd)\"" pod="gadget/gadget-4bwtr" podUID="119b26e6-b744-4543-96a4-112aa3284ecd"
	Sep 23 13:30:05 addons-095355 kubelet[1472]: I0923 13:30:05.288840    1472 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mg8qm" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 13:30:09 addons-095355 kubelet[1472]: I0923 13:30:09.288199    1472 scope.go:117] "RemoveContainer" containerID="a29f6ef1f505ac94d629398307b78ff35e109d8cb6c9c625d7e0e1170ecc5d73"
	Sep 23 13:30:09 addons-095355 kubelet[1472]: E0923 13:30:09.288418    1472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4bwtr_gadget(119b26e6-b744-4543-96a4-112aa3284ecd)\"" pod="gadget/gadget-4bwtr" podUID="119b26e6-b744-4543-96a4-112aa3284ecd"
	Sep 23 13:30:12 addons-095355 kubelet[1472]: I0923 13:30:12.288792    1472 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-mm7dj" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 13:30:12 addons-095355 kubelet[1472]: I0923 13:30:12.289723    1472 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-k2d2s" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 13:30:24 addons-095355 kubelet[1472]: I0923 13:30:24.290091    1472 scope.go:117] "RemoveContainer" containerID="a29f6ef1f505ac94d629398307b78ff35e109d8cb6c9c625d7e0e1170ecc5d73"
	Sep 23 13:30:24 addons-095355 kubelet[1472]: E0923 13:30:24.290361    1472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4bwtr_gadget(119b26e6-b744-4543-96a4-112aa3284ecd)\"" pod="gadget/gadget-4bwtr" podUID="119b26e6-b744-4543-96a4-112aa3284ecd"
	Sep 23 13:30:38 addons-095355 kubelet[1472]: I0923 13:30:38.288976    1472 scope.go:117] "RemoveContainer" containerID="a29f6ef1f505ac94d629398307b78ff35e109d8cb6c9c625d7e0e1170ecc5d73"
	Sep 23 13:30:39 addons-095355 kubelet[1472]: E0923 13:30:39.933537    1472 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"79c1896d40106d2b5732cb632f2651294fffbc77ea2c5709d03a0edba5785ce3\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 13:30:39 addons-095355 kubelet[1472]: E0923 13:30:39.952924    1472 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"cdb087477b1764a68793f8c27228938bb6a2513dede435cf3ec22362eeeb912f\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 13:30:39 addons-095355 kubelet[1472]: E0923 13:30:39.963069    1472 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"6fc7ded8b7313d3e5aa56771f99207a523eaab4be81f5e2ac8efea4c8850f8dc\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 13:30:40 addons-095355 kubelet[1472]: I0923 13:30:40.717036    1472 scope.go:117] "RemoveContainer" containerID="a29f6ef1f505ac94d629398307b78ff35e109d8cb6c9c625d7e0e1170ecc5d73"
	Sep 23 13:30:40 addons-095355 kubelet[1472]: I0923 13:30:40.717686    1472 scope.go:117] "RemoveContainer" containerID="cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83"
	Sep 23 13:30:40 addons-095355 kubelet[1472]: E0923 13:30:40.717931    1472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-4bwtr_gadget(119b26e6-b744-4543-96a4-112aa3284ecd)\"" pod="gadget/gadget-4bwtr" podUID="119b26e6-b744-4543-96a4-112aa3284ecd"
	Sep 23 13:30:41 addons-095355 kubelet[1472]: I0923 13:30:41.721951    1472 scope.go:117] "RemoveContainer" containerID="cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83"
	Sep 23 13:30:41 addons-095355 kubelet[1472]: E0923 13:30:41.722206    1472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-4bwtr_gadget(119b26e6-b744-4543-96a4-112aa3284ecd)\"" pod="gadget/gadget-4bwtr" podUID="119b26e6-b744-4543-96a4-112aa3284ecd"
	Sep 23 13:30:54 addons-095355 kubelet[1472]: I0923 13:30:54.289060    1472 scope.go:117] "RemoveContainer" containerID="cab4b74a94b7c0af36d620c9d52ec282c63600eb7c86aea86525eca786fada83"
	Sep 23 13:30:54 addons-095355 kubelet[1472]: E0923 13:30:54.289700    1472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-4bwtr_gadget(119b26e6-b744-4543-96a4-112aa3284ecd)\"" pod="gadget/gadget-4bwtr" podUID="119b26e6-b744-4543-96a4-112aa3284ecd"
	
	
	==> storage-provisioner [aeab809eb8a9f51a5c12f06aa768a738238ede84b6359f3c761a5cc71e92064a] <==
	I0923 13:24:15.457886       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 13:24:15.537270       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 13:24:15.564976       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 13:24:15.606864       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 13:24:15.607048       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-095355_12211397-e7a7-4975-970e-8ab048114392!
	I0923 13:24:15.640296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"39abf670-955d-4520-8ae1-cb1007e6f1ac", APIVersion:"v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-095355_12211397-e7a7-4975-970e-8ab048114392 became leader
	I0923 13:24:15.707281       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-095355_12211397-e7a7-4975-970e-8ab048114392!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-095355 -n addons-095355
helpers_test.go:261: (dbg) Run:  kubectl --context addons-095355 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-mld5x ingress-nginx-admission-patch-nf8lt test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-095355 describe pod ingress-nginx-admission-create-mld5x ingress-nginx-admission-patch-nf8lt test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-095355 describe pod ingress-nginx-admission-create-mld5x ingress-nginx-admission-patch-nf8lt test-job-nginx-0: exit status 1 (86.910653ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mld5x" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-nf8lt" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-095355 describe pod ingress-nginx-admission-create-mld5x ingress-nginx-admission-patch-nf8lt test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (200.16s)

TestStartStop/group/old-k8s-version/serial/SecondStart (374.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-545656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0923 14:20:39.429722 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:20:56.775994 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:17.507427 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:17.514248 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:17.525630 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:17.547011 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:17.588481 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:17.669898 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:17.831706 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:18.153079 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:18.795098 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:20.076839 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:22.639211 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:27.760961 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:30.793992 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:30.800546 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:30.811979 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:30.833480 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:30.874925 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:30.956377 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:31.118009 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:31.440260 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:32.082158 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:33.364152 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:35.925614 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:38.003604 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:41.047846 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:44.376299 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:50.070234 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:51.289739 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:58.493285 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:11.772139 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:12.811875 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:18.697683 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:39.454590 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:40.525823 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:42.992964 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:45.206288 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:45.212994 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:45.224620 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:45.246493 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:45.288056 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:45.369469 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:45.531030 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:45.853286 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:46.495409 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:47.776957 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:50.338914 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:52.734398 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:55.461025 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:55.564667 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:05.702888 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:23.271450 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:26.184385 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:00.516353 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:01.376368 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:07.146393 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-545656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m10.453301599s)

-- stdout --
	* [old-k8s-version-545656] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-545656" primary control-plane node in "old-k8s-version-545656" cluster
	* Pulling base image v0.0.45-1726784731-19672 ...
	* Restarting existing docker container for "old-k8s-version-545656" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-545656 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0923 14:20:25.100133 1281940 out.go:345] Setting OutFile to fd 1 ...
	I0923 14:20:25.100277 1281940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:20:25.100288 1281940 out.go:358] Setting ErrFile to fd 2...
	I0923 14:20:25.100295 1281940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:20:25.100572 1281940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	I0923 14:20:25.101020 1281940 out.go:352] Setting JSON to false
	I0923 14:20:25.102327 1281940 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":158571,"bootTime":1726942654,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0923 14:20:25.102416 1281940 start.go:139] virtualization:  
	I0923 14:20:25.105999 1281940 out.go:177] * [old-k8s-version-545656] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 14:20:25.108689 1281940 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 14:20:25.108815 1281940 notify.go:220] Checking for updates...
	I0923 14:20:25.117197 1281940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 14:20:25.119960 1281940 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 14:20:25.122594 1281940 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	I0923 14:20:25.125094 1281940 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 14:20:25.127573 1281940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 14:20:25.130867 1281940 config.go:182] Loaded profile config "old-k8s-version-545656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0923 14:20:25.134037 1281940 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 14:20:25.136582 1281940 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 14:20:25.161343 1281940 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 14:20:25.161463 1281940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 14:20:25.226113 1281940 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 14:20:25.215201769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 14:20:25.226229 1281940 docker.go:318] overlay module found
	I0923 14:20:25.229256 1281940 out.go:177] * Using the docker driver based on existing profile
	I0923 14:20:25.231758 1281940 start.go:297] selected driver: docker
	I0923 14:20:25.231786 1281940 start.go:901] validating driver "docker" against &{Name:old-k8s-version-545656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-545656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 14:20:25.231898 1281940 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 14:20:25.232544 1281940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 14:20:25.284543 1281940 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 14:20:25.275274126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 14:20:25.285017 1281940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 14:20:25.285049 1281940 cni.go:84] Creating CNI manager for ""
	I0923 14:20:25.285086 1281940 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 14:20:25.285131 1281940 start.go:340] cluster config:
	{Name:old-k8s-version-545656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-545656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 14:20:25.288282 1281940 out.go:177] * Starting "old-k8s-version-545656" primary control-plane node in "old-k8s-version-545656" cluster
	I0923 14:20:25.290966 1281940 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 14:20:25.293746 1281940 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 14:20:25.296569 1281940 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0923 14:20:25.296631 1281940 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0923 14:20:25.296643 1281940 cache.go:56] Caching tarball of preloaded images
	I0923 14:20:25.296662 1281940 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 14:20:25.296753 1281940 preload.go:172] Found /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 14:20:25.296765 1281940 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0923 14:20:25.296885 1281940 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/config.json ...
	I0923 14:20:25.314378 1281940 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 14:20:25.314401 1281940 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 14:20:25.314420 1281940 cache.go:194] Successfully downloaded all kic artifacts
	I0923 14:20:25.314444 1281940 start.go:360] acquireMachinesLock for old-k8s-version-545656: {Name:mkf56f3b03e507f9b0d9cfe3ac6e0b9815015f41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 14:20:25.314503 1281940 start.go:364] duration metric: took 37.489µs to acquireMachinesLock for "old-k8s-version-545656"
	I0923 14:20:25.314528 1281940 start.go:96] Skipping create...Using existing machine configuration
	I0923 14:20:25.314538 1281940 fix.go:54] fixHost starting: 
	I0923 14:20:25.314826 1281940 cli_runner.go:164] Run: docker container inspect old-k8s-version-545656 --format={{.State.Status}}
	I0923 14:20:25.331396 1281940 fix.go:112] recreateIfNeeded on old-k8s-version-545656: state=Stopped err=<nil>
	W0923 14:20:25.331427 1281940 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 14:20:25.334327 1281940 out.go:177] * Restarting existing docker container for "old-k8s-version-545656" ...
	I0923 14:20:25.337148 1281940 cli_runner.go:164] Run: docker start old-k8s-version-545656
	I0923 14:20:25.687131 1281940 cli_runner.go:164] Run: docker container inspect old-k8s-version-545656 --format={{.State.Status}}
	I0923 14:20:25.720256 1281940 kic.go:430] container "old-k8s-version-545656" state is running.
	I0923 14:20:25.720741 1281940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-545656
	I0923 14:20:25.743183 1281940 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/config.json ...
	I0923 14:20:25.743593 1281940 machine.go:93] provisionDockerMachine start ...
	I0923 14:20:25.743672 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:25.768918 1281940 main.go:141] libmachine: Using SSH client type: native
	I0923 14:20:25.769235 1281940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41790 <nil> <nil>}
	I0923 14:20:25.769249 1281940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 14:20:25.771633 1281940 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0923 14:20:28.911531 1281940 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-545656
	
	I0923 14:20:28.911560 1281940 ubuntu.go:169] provisioning hostname "old-k8s-version-545656"
	I0923 14:20:28.911635 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:28.929940 1281940 main.go:141] libmachine: Using SSH client type: native
	I0923 14:20:28.930192 1281940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41790 <nil> <nil>}
	I0923 14:20:28.930211 1281940 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-545656 && echo "old-k8s-version-545656" | sudo tee /etc/hostname
	I0923 14:20:29.079580 1281940 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-545656
	
	I0923 14:20:29.079672 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:29.096356 1281940 main.go:141] libmachine: Using SSH client type: native
	I0923 14:20:29.096605 1281940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41790 <nil> <nil>}
	I0923 14:20:29.096631 1281940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-545656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-545656/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-545656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 14:20:29.231423 1281940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
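The embedded shell above is the entire hostname fix-up: if /etc/hosts already maps the hostname, do nothing; if it has a 127.0.1.1 line, rewrite it; otherwise append one. A small Go sketch of the same edit, writing to a hosts.example copy so the sketch stays side-effect free (the output suffix is this example's assumption):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the sed/tee logic from the log: replace an
// existing "127.0.1.1 ..." line with the new hostname, or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	line := "127.0.1.1 " + hostname
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out []byte
	if re.Match(data) {
		out = re.ReplaceAll(data, []byte(line))
	} else {
		out = append(data, []byte("\n"+line+"\n")...)
	}
	// Write a copy rather than editing /etc/hosts in place.
	return os.WriteFile(path+".example", out, 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-545656"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}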
	I0923 14:20:29.231449 1281940 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19690-1028234/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-1028234/.minikube}
	I0923 14:20:29.231473 1281940 ubuntu.go:177] setting up certificates
	I0923 14:20:29.231483 1281940 provision.go:84] configureAuth start
	I0923 14:20:29.231543 1281940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-545656
	I0923 14:20:29.249123 1281940 provision.go:143] copyHostCerts
	I0923 14:20:29.249200 1281940 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.pem, removing ...
	I0923 14:20:29.249216 1281940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.pem
	I0923 14:20:29.249296 1281940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.pem (1082 bytes)
	I0923 14:20:29.249454 1281940 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-1028234/.minikube/cert.pem, removing ...
	I0923 14:20:29.249465 1281940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-1028234/.minikube/cert.pem
	I0923 14:20:29.249497 1281940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-1028234/.minikube/cert.pem (1123 bytes)
	I0923 14:20:29.249566 1281940 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-1028234/.minikube/key.pem, removing ...
	I0923 14:20:29.249575 1281940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-1028234/.minikube/key.pem
	I0923 14:20:29.249603 1281940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-1028234/.minikube/key.pem (1675 bytes)
	I0923 14:20:29.249665 1281940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-545656 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-545656]
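provision.go:117 generates a server certificate whose subject alternative names are exactly the san=[...] set logged above. A sketch with the same names and IPs using only crypto/x509; note it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SAN set copied from the provision.go line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-545656"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-545656"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	// Self-signed (template doubles as parent) as a simplification.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}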
	I0923 14:20:29.871283 1281940 provision.go:177] copyRemoteCerts
	I0923 14:20:29.871376 1281940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 14:20:29.871465 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:29.888729 1281940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41790 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/old-k8s-version-545656/id_rsa Username:docker}
	I0923 14:20:29.984533 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 14:20:30.029595 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0923 14:20:30.063739 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 14:20:30.095738 1281940 provision.go:87] duration metric: took 864.237525ms to configureAuth
	I0923 14:20:30.095783 1281940 ubuntu.go:193] setting minikube options for container-runtime
	I0923 14:20:30.096010 1281940 config.go:182] Loaded profile config "old-k8s-version-545656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0923 14:20:30.096025 1281940 machine.go:96] duration metric: took 4.352415799s to provisionDockerMachine
	I0923 14:20:30.096034 1281940 start.go:293] postStartSetup for "old-k8s-version-545656" (driver="docker")
	I0923 14:20:30.096046 1281940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 14:20:30.096115 1281940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 14:20:30.096181 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:30.124039 1281940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41790 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/old-k8s-version-545656/id_rsa Username:docker}
	I0923 14:20:30.220821 1281940 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 14:20:30.224186 1281940 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 14:20:30.224225 1281940 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 14:20:30.224236 1281940 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 14:20:30.224244 1281940 info.go:137] Remote host: Ubuntu 22.04.5 LTS
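The three "Couldn't set key" warnings are benign: /etc/os-release is a flat KEY=value file, and keys with no matching struct field are simply reported and skipped. A sketch of that parse into a plain map, assuming only the file's documented format:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads KEY=value pairs from an os-release file; keys the
// caller does not recognize can then be ignored, which is all the
// libmachine warnings above amount to.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	kv := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		kv[k] = strings.Trim(v, `"`)
	}
	return kv, sc.Err()
}

func main() {
	kv, err := parseOSRelease("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("%s %s\n", kv["NAME"], kv["VERSION"]) // e.g. "Ubuntu 22.04.5 LTS"
}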
	I0923 14:20:30.224254 1281940 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-1028234/.minikube/addons for local assets ...
	I0923 14:20:30.224310 1281940 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-1028234/.minikube/files for local assets ...
	I0923 14:20:30.224405 1281940 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-1028234/.minikube/files/etc/ssl/certs/10336162.pem -> 10336162.pem in /etc/ssl/certs
	I0923 14:20:30.224514 1281940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 14:20:30.233447 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/files/etc/ssl/certs/10336162.pem --> /etc/ssl/certs/10336162.pem (1708 bytes)
	I0923 14:20:30.259309 1281940 start.go:296] duration metric: took 163.257395ms for postStartSetup
	I0923 14:20:30.259542 1281940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 14:20:30.259593 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:30.279071 1281940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41790 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/old-k8s-version-545656/id_rsa Username:docker}
	I0923 14:20:30.376231 1281940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 14:20:30.381178 1281940 fix.go:56] duration metric: took 5.066631485s for fixHost
	I0923 14:20:30.381206 1281940 start.go:83] releasing machines lock for "old-k8s-version-545656", held for 5.066688214s
	I0923 14:20:30.381278 1281940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-545656
	I0923 14:20:30.399969 1281940 ssh_runner.go:195] Run: cat /version.json
	I0923 14:20:30.399982 1281940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 14:20:30.400023 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:30.400052 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:30.420964 1281940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41790 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/old-k8s-version-545656/id_rsa Username:docker}
	I0923 14:20:30.425429 1281940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41790 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/old-k8s-version-545656/id_rsa Username:docker}
	I0923 14:20:30.647410 1281940 ssh_runner.go:195] Run: systemctl --version
	I0923 14:20:30.652443 1281940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 14:20:30.657398 1281940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 14:20:30.692850 1281940 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 14:20:30.692977 1281940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 14:20:30.702247 1281940 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
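The find/mv pair above sidelines any bridge or podman CNI config by renaming it to *.mk_disabled, so the kindnet CNI recommended later is the only one active; here nothing matched. An equivalent sketch in Go, using the directory path from the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfs mirrors the "find ... -exec mv {} {}.mk_disabled"
// step: rename bridge/podman CNI configs out of the way.
func disableBridgeConfs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeConfs("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}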
	I0923 14:20:30.702274 1281940 start.go:495] detecting cgroup driver to use...
	I0923 14:20:30.702326 1281940 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 14:20:30.702385 1281940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 14:20:30.716853 1281940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 14:20:30.729117 1281940 docker.go:217] disabling cri-docker service (if available) ...
	I0923 14:20:30.729185 1281940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 14:20:30.742590 1281940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 14:20:30.754705 1281940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 14:20:30.857178 1281940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 14:20:30.951653 1281940 docker.go:233] disabling docker service ...
	I0923 14:20:30.951751 1281940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 14:20:30.970489 1281940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 14:20:30.986803 1281940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 14:20:31.089248 1281940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 14:20:31.211516 1281940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 14:20:31.225763 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 14:20:31.244574 1281940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0923 14:20:31.255429 1281940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 14:20:31.266796 1281940 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 14:20:31.266920 1281940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 14:20:31.279176 1281940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 14:20:31.290549 1281940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 14:20:31.302369 1281940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 14:20:31.312106 1281940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 14:20:31.321608 1281940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 14:20:31.333489 1281940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 14:20:31.342616 1281940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 14:20:31.352571 1281940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 14:20:31.447048 1281940 ssh_runner.go:195] Run: sudo systemctl restart containerd
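The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.2, force SystemdCgroup = false to match the host's detected cgroupfs driver, and point conf_dir at /etc/cni/net.d, followed by daemon-reload and a containerd restart. A sketch of three of those edits as Go regex replacements, printing to stdout instead of rewriting the file:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Regex edits equivalent to the sed runs above; ${1} preserves the
// original indentation captured by (\s*).
var edits = []struct{ re, repl string }{
	{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.2"`},
	{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
	{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
}

func main() {
	data, err := os.ReadFile("/etc/containerd/config.toml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range edits {
		data = regexp.MustCompile(e.re).ReplaceAll(data, []byte(e.repl))
	}
	os.Stdout.Write(data) // sed -i would write back in place
}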
	I0923 14:20:31.619195 1281940 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0923 14:20:31.619285 1281940 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0923 14:20:31.623665 1281940 start.go:563] Will wait 60s for crictl version
	I0923 14:20:31.623778 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:20:31.627382 1281940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 14:20:31.666389 1281940 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0923 14:20:31.666503 1281940 ssh_runner.go:195] Run: containerd --version
	I0923 14:20:31.699150 1281940 ssh_runner.go:195] Run: containerd --version
	I0923 14:20:31.725375 1281940 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0923 14:20:31.727473 1281940 cli_runner.go:164] Run: docker network inspect old-k8s-version-545656 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 14:20:31.743128 1281940 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0923 14:20:31.746778 1281940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 14:20:31.757815 1281940 kubeadm.go:883] updating cluster {Name:old-k8s-version-545656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-545656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 14:20:31.757961 1281940 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0923 14:20:31.758031 1281940 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 14:20:31.796701 1281940 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 14:20:31.796724 1281940 containerd.go:534] Images already preloaded, skipping extraction
	I0923 14:20:31.796789 1281940 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 14:20:31.835460 1281940 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 14:20:31.835487 1281940 cache_images.go:84] Images are preloaded, skipping loading
	I0923 14:20:31.835495 1281940 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0923 14:20:31.835707 1281940 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-545656 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-545656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
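A sketch of how a systemd drop-in like the one above can be rendered from data with text/template; the field values are lifted from the kubeadm.go lines in the log, and the flag list is abbreviated to the essentials:

package main

import (
	"os"
	"text/template"
)

// The unit is a template, the flag values are data.
const unit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime=remote --container-runtime-endpoint={{.Endpoint}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.20.0",
		"Endpoint": "unix:///run/containerd/containerd.sock",
		"Node":     "old-k8s-version-545656",
		"IP":       "192.168.85.2",
	})
}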
	I0923 14:20:31.835783 1281940 ssh_runner.go:195] Run: sudo crictl info
	I0923 14:20:31.879658 1281940 cni.go:84] Creating CNI manager for ""
	I0923 14:20:31.879684 1281940 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 14:20:31.879694 1281940 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 14:20:31.879715 1281940 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-545656 NodeName:old-k8s-version-545656 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0923 14:20:31.879853 1281940 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-545656"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
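The generated config above carries three address ranges: podSubnet 10.244.0.0/16, serviceSubnet 10.96.0.0/12, and the node network 192.168.85.0/24. A quick sketch of the kind of overlap check kubeadm performs at preflight (the helper name is this example's):

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR ranges intersect: if either
// network contains the other's base address, they overlap.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.244.0.0/16")
	_, svcs, _ := net.ParseCIDR("10.96.0.0/12")
	fmt.Println("pod/service overlap:", overlaps(pods, svcs)) // false
}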
	
	I0923 14:20:31.879930 1281940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0923 14:20:31.889218 1281940 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 14:20:31.889292 1281940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 14:20:31.898940 1281940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0923 14:20:31.918025 1281940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 14:20:31.936853 1281940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0923 14:20:31.955694 1281940 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0923 14:20:31.959531 1281940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 14:20:31.970588 1281940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 14:20:32.066555 1281940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 14:20:32.084032 1281940 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656 for IP: 192.168.85.2
	I0923 14:20:32.084061 1281940 certs.go:194] generating shared ca certs ...
	I0923 14:20:32.084079 1281940 certs.go:226] acquiring lock for ca certs: {Name:mk03d32b578b2438d161be017440f804f69b681b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 14:20:32.084228 1281940 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.key
	I0923 14:20:32.084287 1281940 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.key
	I0923 14:20:32.084299 1281940 certs.go:256] generating profile certs ...
	I0923 14:20:32.084387 1281940 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.key
	I0923 14:20:32.084469 1281940 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/apiserver.key.c36474c9
	I0923 14:20:32.084524 1281940 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/proxy-client.key
	I0923 14:20:32.084640 1281940 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/1033616.pem (1338 bytes)
	W0923 14:20:32.084676 1281940 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/1033616_empty.pem, impossibly tiny 0 bytes
	I0923 14:20:32.084700 1281940 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 14:20:32.084726 1281940 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem (1082 bytes)
	I0923 14:20:32.084753 1281940 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/cert.pem (1123 bytes)
	I0923 14:20:32.084777 1281940 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/key.pem (1675 bytes)
	I0923 14:20:32.084825 1281940 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/files/etc/ssl/certs/10336162.pem (1708 bytes)
	I0923 14:20:32.085467 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 14:20:32.137997 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 14:20:32.166517 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 14:20:32.193245 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 14:20:32.220988 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 14:20:32.252234 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 14:20:32.279188 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 14:20:32.304188 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 14:20:32.328875 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/files/etc/ssl/certs/10336162.pem --> /usr/share/ca-certificates/10336162.pem (1708 bytes)
	I0923 14:20:32.354050 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 14:20:32.379153 1281940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/1033616.pem --> /usr/share/ca-certificates/1033616.pem (1338 bytes)
	I0923 14:20:32.405068 1281940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 14:20:32.427562 1281940 ssh_runner.go:195] Run: openssl version
	I0923 14:20:32.433951 1281940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10336162.pem && ln -fs /usr/share/ca-certificates/10336162.pem /etc/ssl/certs/10336162.pem"
	I0923 14:20:32.443601 1281940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10336162.pem
	I0923 14:20:32.447094 1281940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 13:34 /usr/share/ca-certificates/10336162.pem
	I0923 14:20:32.447192 1281940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10336162.pem
	I0923 14:20:32.454082 1281940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10336162.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 14:20:32.463366 1281940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 14:20:32.472945 1281940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 14:20:32.476353 1281940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 13:23 /usr/share/ca-certificates/minikubeCA.pem
	I0923 14:20:32.476413 1281940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 14:20:32.483132 1281940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 14:20:32.492717 1281940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1033616.pem && ln -fs /usr/share/ca-certificates/1033616.pem /etc/ssl/certs/1033616.pem"
	I0923 14:20:32.502289 1281940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1033616.pem
	I0923 14:20:32.505836 1281940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 13:34 /usr/share/ca-certificates/1033616.pem
	I0923 14:20:32.505931 1281940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1033616.pem
	I0923 14:20:32.513232 1281940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1033616.pem /etc/ssl/certs/51391683.0"
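Each ln -fs above follows OpenSSL's CA lookup convention: a certificate is found in /etc/ssl/certs under <subject-hash>.0, which is where 3ec20f2e.0 and b5213941.0 come from. A sketch that derives the hash with the same openssl x509 -hash call shown in the log and then creates the link:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash reproduces the "openssl x509 -hash" + "ln -fs" pair.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := certsDir + "/" + hash + ".0"
	os.Remove(link) // ln -fs semantics: replace the link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}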
	I0923 14:20:32.522477 1281940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 14:20:32.526142 1281940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 14:20:32.533008 1281940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 14:20:32.540673 1281940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 14:20:32.547643 1281940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 14:20:32.554632 1281940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 14:20:32.561613 1281940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
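The six openssl runs above all use -checkend 86400: fail if the certificate expires within the next 24 hours, which is what triggers minikube's cert regeneration. The same check in Go, parsing the PEM directly (file path reused from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the Go equivalent of "openssl x509 -checkend":
// report whether NotAfter falls inside the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}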
	I0923 14:20:32.568975 1281940 kubeadm.go:392] StartCluster: {Name:old-k8s-version-545656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-545656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 14:20:32.569120 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0923 14:20:32.569191 1281940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 14:20:32.608278 1281940 cri.go:89] found id: "a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2"
	I0923 14:20:32.608305 1281940 cri.go:89] found id: "1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff"
	I0923 14:20:32.608311 1281940 cri.go:89] found id: "f067863cccf3c327d51b4d06cd4c99aa7abd449fb53f8bd47167f3208edc5a70"
	I0923 14:20:32.608315 1281940 cri.go:89] found id: "887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025"
	I0923 14:20:32.608319 1281940 cri.go:89] found id: "b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef"
	I0923 14:20:32.608322 1281940 cri.go:89] found id: "47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76"
	I0923 14:20:32.608326 1281940 cri.go:89] found id: "be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf"
	I0923 14:20:32.608329 1281940 cri.go:89] found id: "84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629"
	I0923 14:20:32.608332 1281940 cri.go:89] found id: ""
	I0923 14:20:32.608390 1281940 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0923 14:20:32.625193 1281940 cri.go:116] JSON = null
	W0923 14:20:32.625251 1281940 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0923 14:20:32.625329 1281940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 14:20:32.639800 1281940 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 14:20:32.639819 1281940 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 14:20:32.639875 1281940 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 14:20:32.648737 1281940 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 14:20:32.649531 1281940 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-545656" does not appear in /home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 14:20:32.649877 1281940 kubeconfig.go:62] /home/jenkins/minikube-integration/19690-1028234/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-545656" cluster setting kubeconfig missing "old-k8s-version-545656" context setting]
	I0923 14:20:32.650433 1281940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/kubeconfig: {Name:mkd806df25aca780e43239d5b6c8b09e764ab897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 14:20:32.652210 1281940 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 14:20:32.661483 1281940 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0923 14:20:32.661562 1281940 kubeadm.go:597] duration metric: took 21.736336ms to restartPrimaryControlPlane
	I0923 14:20:32.661587 1281940 kubeadm.go:394] duration metric: took 92.620026ms to StartCluster
	I0923 14:20:32.661632 1281940 settings.go:142] acquiring lock: {Name:mk31b92312dde44fbd825c77a82e5dececb66fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 14:20:32.661725 1281940 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 14:20:32.662783 1281940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/kubeconfig: {Name:mkd806df25aca780e43239d5b6c8b09e764ab897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 14:20:32.663046 1281940 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 14:20:32.663571 1281940 config.go:182] Loaded profile config "old-k8s-version-545656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0923 14:20:32.663626 1281940 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 14:20:32.663753 1281940 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-545656"
	I0923 14:20:32.663770 1281940 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-545656"
	I0923 14:20:32.663773 1281940 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-545656"
	I0923 14:20:32.663784 1281940 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-545656"
	I0923 14:20:32.663789 1281940 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-545656"
	I0923 14:20:32.663795 1281940 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-545656"
	W0923 14:20:32.663801 1281940 addons.go:243] addon metrics-server should already be in state true
	I0923 14:20:32.663824 1281940 host.go:66] Checking if "old-k8s-version-545656" exists ...
	I0923 14:20:32.664183 1281940 cli_runner.go:164] Run: docker container inspect old-k8s-version-545656 --format={{.State.Status}}
	I0923 14:20:32.664348 1281940 cli_runner.go:164] Run: docker container inspect old-k8s-version-545656 --format={{.State.Status}}
	I0923 14:20:32.664895 1281940 addons.go:69] Setting dashboard=true in profile "old-k8s-version-545656"
	I0923 14:20:32.664918 1281940 addons.go:234] Setting addon dashboard=true in "old-k8s-version-545656"
	W0923 14:20:32.664925 1281940 addons.go:243] addon dashboard should already be in state true
	I0923 14:20:32.664955 1281940 host.go:66] Checking if "old-k8s-version-545656" exists ...
	W0923 14:20:32.663777 1281940 addons.go:243] addon storage-provisioner should already be in state true
	I0923 14:20:32.665066 1281940 host.go:66] Checking if "old-k8s-version-545656" exists ...
	I0923 14:20:32.665387 1281940 cli_runner.go:164] Run: docker container inspect old-k8s-version-545656 --format={{.State.Status}}
	I0923 14:20:32.665558 1281940 cli_runner.go:164] Run: docker container inspect old-k8s-version-545656 --format={{.State.Status}}
	I0923 14:20:32.673604 1281940 out.go:177] * Verifying Kubernetes components...
	I0923 14:20:32.676144 1281940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 14:20:32.728587 1281940 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 14:20:32.732563 1281940 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 14:20:32.732590 1281940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 14:20:32.732657 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:32.740419 1281940 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-545656"
	W0923 14:20:32.740444 1281940 addons.go:243] addon default-storageclass should already be in state true
	I0923 14:20:32.740469 1281940 host.go:66] Checking if "old-k8s-version-545656" exists ...
	I0923 14:20:32.740913 1281940 cli_runner.go:164] Run: docker container inspect old-k8s-version-545656 --format={{.State.Status}}
	I0923 14:20:32.741110 1281940 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0923 14:20:32.745096 1281940 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0923 14:20:32.745536 1281940 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0923 14:20:32.751290 1281940 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 14:20:32.751315 1281940 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 14:20:32.751497 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:32.751498 1281940 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0923 14:20:32.751513 1281940 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0923 14:20:32.751562 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:32.787280 1281940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41790 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/old-k8s-version-545656/id_rsa Username:docker}
	I0923 14:20:32.787975 1281940 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 14:20:32.787991 1281940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 14:20:32.788044 1281940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-545656
	I0923 14:20:32.814304 1281940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41790 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/old-k8s-version-545656/id_rsa Username:docker}
	I0923 14:20:32.818815 1281940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41790 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/old-k8s-version-545656/id_rsa Username:docker}
	I0923 14:20:32.836714 1281940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41790 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/old-k8s-version-545656/id_rsa Username:docker}
	I0923 14:20:32.879708 1281940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 14:20:32.923892 1281940 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-545656" to be "Ready" ...
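node_ready.go starts a bounded poll here: up to 6m0s, checking the node's Ready condition each tick. A generic sketch of that wait loop; the probe below is a stand-in (minikube's real check reads the node through the Kubernetes API), and the demo timeout is shortened so the example terminates quickly:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check until it returns true or the deadline passes,
// the same shape as the 6m0s node-Ready wait in the log.
func waitFor(timeout, interval time.Duration, check func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	// Short timeout for demonstration; the log uses 6m0s.
	err := waitFor(3*time.Second, 500*time.Millisecond, func() bool {
		return false // stand-in; replace with a real readiness probe
	})
	fmt.Println(err)
}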
	I0923 14:20:32.983562 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 14:20:32.989583 1281940 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0923 14:20:32.989611 1281940 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0923 14:20:33.025847 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 14:20:33.034249 1281940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 14:20:33.034286 1281940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0923 14:20:33.055250 1281940 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0923 14:20:33.055284 1281940 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0923 14:20:33.066137 1281940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 14:20:33.066163 1281940 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 14:20:33.105890 1281940 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0923 14:20:33.105919 1281940 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0923 14:20:33.113373 1281940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 14:20:33.113408 1281940 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 14:20:33.159178 1281940 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0923 14:20:33.159205 1281940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0923 14:20:33.174510 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 14:20:33.214132 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.214185 1281940 retry.go:31] will retry after 257.709856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
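Every apply failure from here on lands in retry.go:31 with a randomized, growing delay: kubelet has been restarted, but the API server is not yet answering on localhost:8443, so each kubectl apply is retried until it is. A compact sketch of that retry shape (attempt count and base delay are this example's assumptions):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn and, on failure, sleeps a jittered, growing delay
// before the next attempt, mirroring the "will retry after Xms"
// messages in the log.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retry(3, 200*time.Millisecond, func() error {
		return errors.New("connection to localhost:8443 refused")
	})
	fmt.Println("gave up:", err)
}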
	I0923 14:20:33.230167 1281940 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0923 14:20:33.230204 1281940 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0923 14:20:33.252449 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.252484 1281940 retry.go:31] will retry after 263.750947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.258378 1281940 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0923 14:20:33.258406 1281940 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0923 14:20:33.290977 1281940 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0923 14:20:33.291005 1281940 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0923 14:20:33.316876 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.316915 1281940 retry.go:31] will retry after 367.631428ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.321907 1281940 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0923 14:20:33.321933 1281940 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0923 14:20:33.341925 1281940 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 14:20:33.341953 1281940 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0923 14:20:33.362860 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0923 14:20:33.437063 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.437146 1281940 retry.go:31] will retry after 173.662804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.472330 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 14:20:33.516661 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0923 14:20:33.566258 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.566289 1281940 retry.go:31] will retry after 247.302318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 14:20:33.599660 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.599690 1281940 retry.go:31] will retry after 485.915161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.611855 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0923 14:20:33.684834 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.684866 1281940 retry.go:31] will retry after 560.723702ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.684970 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 14:20:33.760424 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.760508 1281940 retry.go:31] will retry after 395.939295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.814678 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0923 14:20:33.891396 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:33.891441 1281940 retry.go:31] will retry after 358.378569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:34.086315 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 14:20:34.156609 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 14:20:34.170739 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:34.170850 1281940 retry.go:31] will retry after 429.622716ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 14:20:34.243846 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:34.243877 1281940 retry.go:31] will retry after 689.90763ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:34.246006 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 14:20:34.250240 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0923 14:20:34.349446 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:34.349491 1281940 retry.go:31] will retry after 571.159624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 14:20:34.349525 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:34.349537 1281940 retry.go:31] will retry after 524.15205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:34.601568 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0923 14:20:34.682760 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:34.682795 1281940 retry.go:31] will retry after 1.254087419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:34.874272 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 14:20:34.920918 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 14:20:34.924762 1281940 node_ready.go:53] error getting node "old-k8s-version-545656": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-545656": dial tcp 192.168.85.2:8443: connect: connection refused
	I0923 14:20:34.934978 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 14:20:34.976439 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:34.976475 1281940 retry.go:31] will retry after 1.133107072s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 14:20:35.046549 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:35.046586 1281940 retry.go:31] will retry after 867.381848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 14:20:35.058508 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:35.058587 1281940 retry.go:31] will retry after 1.121941467s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:35.914157 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 14:20:35.937059 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0923 14:20:36.014217 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:36.014254 1281940 retry.go:31] will retry after 1.401301911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 14:20:36.034630 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:36.034666 1281940 retry.go:31] will retry after 1.269473804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:36.109835 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 14:20:36.181571 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 14:20:36.187267 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:36.187355 1281940 retry.go:31] will retry after 2.336871372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 14:20:36.258957 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:36.259031 1281940 retry.go:31] will retry after 1.267055641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:36.925434 1281940 node_ready.go:53] error getting node "old-k8s-version-545656": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-545656": dial tcp 192.168.85.2:8443: connect: connection refused
	I0923 14:20:37.305028 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0923 14:20:37.384983 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:37.385015 1281940 retry.go:31] will retry after 1.023496485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:37.416331 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0923 14:20:37.498074 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:37.498114 1281940 retry.go:31] will retry after 2.616468719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:37.527221 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 14:20:37.601340 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:37.601382 1281940 retry.go:31] will retry after 1.032304668s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:38.409683 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0923 14:20:38.486456 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:38.486487 1281940 retry.go:31] will retry after 4.011936916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:38.524797 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0923 14:20:38.605722 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:38.605759 1281940 retry.go:31] will retry after 2.627231819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:38.633877 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 14:20:38.722691 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:38.722726 1281940 retry.go:31] will retry after 2.576999038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:39.424386 1281940 node_ready.go:53] error getting node "old-k8s-version-545656": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-545656": dial tcp 192.168.85.2:8443: connect: connection refused
	I0923 14:20:40.114923 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0923 14:20:40.188018 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:40.188052 1281940 retry.go:31] will retry after 3.014200233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:41.233938 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 14:20:41.300225 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 14:20:41.424569 1281940 node_ready.go:53] error getting node "old-k8s-version-545656": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-545656": dial tcp 192.168.85.2:8443: connect: connection refused
	W0923 14:20:41.445005 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:41.445043 1281940 retry.go:31] will retry after 2.545194214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 14:20:41.642391 1281940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:41.642453 1281940 retry.go:31] will retry after 3.026894814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 14:20:42.499284 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 14:20:43.203202 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 14:20:43.991194 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 14:20:44.669783 1281940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 14:20:49.455945 1281940 node_ready.go:49] node "old-k8s-version-545656" has status "Ready":"True"
	I0923 14:20:49.455971 1281940 node_ready.go:38] duration metric: took 16.532044561s for node "old-k8s-version-545656" to be "Ready" ...
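
	The node_ready lines interleaved with the retries above are a separate poll: roughly every two seconds the node object is re-fetched, "connection refused" errors are tolerated while the apiserver restarts, and the wait ends once the node's Ready condition reports True (16.5s in total here). A sketch of an equivalent check with client-go; the function name and interval are assumptions, not minikube's code:

		package main

		import (
			"context"
			"time"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/apimachinery/pkg/util/wait"
			"k8s.io/client-go/kubernetes"
		)

		// waitNodeReady polls until the named node's Ready condition is True,
		// treating fetch errors (apiserver still coming up) as "not yet"
		// rather than as fatal, like the "error getting node" lines above.
		func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
			return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
				if err != nil {
					return false, nil // e.g. dial tcp ...:8443: connection refused
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		}
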
	I0923 14:20:49.455982 1281940 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 14:20:49.488815 1281940 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-t25c8" in "kube-system" namespace to be "Ready" ...
	I0923 14:20:49.621281 1281940 pod_ready.go:93] pod "coredns-74ff55c5b-t25c8" in "kube-system" namespace has status "Ready":"True"
	I0923 14:20:49.621355 1281940 pod_ready.go:82] duration metric: took 132.443813ms for pod "coredns-74ff55c5b-t25c8" in "kube-system" namespace to be "Ready" ...
	I0923 14:20:49.621379 1281940 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-545656" in "kube-system" namespace to be "Ready" ...
	I0923 14:20:49.716114 1281940 pod_ready.go:93] pod "etcd-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"True"
	I0923 14:20:49.716136 1281940 pod_ready.go:82] duration metric: took 94.737325ms for pod "etcd-old-k8s-version-545656" in "kube-system" namespace to be "Ready" ...
	I0923 14:20:49.716151 1281940 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-545656" in "kube-system" namespace to be "Ready" ...
	I0923 14:20:50.298934 1281940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.799609549s)
	I0923 14:20:50.607548 1281940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.616314169s)
	I0923 14:20:50.607643 1281940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.937831629s)
	I0923 14:20:50.607661 1281940 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-545656"
	I0923 14:20:50.607732 1281940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.404492143s)
	I0923 14:20:50.610363 1281940 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-545656 addons enable metrics-server
	
	I0923 14:20:50.613221 1281940 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0923 14:20:50.616090 1281940 addons.go:510] duration metric: took 17.952463562s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
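
	Note the pairing in the lines above: the four apply commands issued between 14:20:42 and 14:20:44 produced Run: lines, then their Completed: lines at 14:20:50 carry elapsed times (7.8s, 6.6s, 5.9s, 7.4s) because they were still in flight when the apiserver finally came up. A small sketch of that run-and-report-duration pattern; the threshold and name are assumptions:

		package main

		import (
			"fmt"
			"os/exec"
			"time"
		)

		// runLogged executes a command and, when it ran unusually long,
		// emits a separate Completed line with the elapsed time, mirroring
		// the paired ssh_runner Run/Completed lines above.
		func runLogged(name string, args ...string) error {
			fmt.Printf("Run: %s %v\n", name, args)
			start := time.Now()
			err := exec.Command(name, args...).Run()
			if elapsed := time.Since(start); elapsed > 2*time.Second { // assumed threshold
				fmt.Printf("Completed: %s %v: (%s)\n", name, args, elapsed)
			}
			return err
		}
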
	I0923 14:20:51.725643 1281940 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:20:54.222984 1281940 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:20:56.722683 1281940 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:20:58.723902 1281940 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:01.222877 1281940 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"True"
	I0923 14:21:01.222906 1281940 pod_ready.go:82] duration metric: took 11.5067456s for pod "kube-apiserver-old-k8s-version-545656" in "kube-system" namespace to be "Ready" ...
	I0923 14:21:01.222920 1281940 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace to be "Ready" ...
	I0923 14:21:03.229930 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:05.230409 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:07.739819 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:10.232341 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:12.731238 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:15.230283 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:17.730356 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:20.229814 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:22.231504 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:24.729524 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:26.730584 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:29.229463 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:31.229780 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:33.733780 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:36.230012 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:38.230436 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:40.730255 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:43.230170 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:45.235179 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:47.730192 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:49.730645 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:52.229640 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:54.230162 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:56.729831 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:21:58.732918 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:01.230131 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:03.729468 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:06.230252 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:08.231418 1281940 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:10.730929 1281940 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"True"
	I0923 14:22:10.730958 1281940 pod_ready.go:82] duration metric: took 1m9.508029905s for pod "kube-controller-manager-old-k8s-version-545656" in "kube-system" namespace to be "Ready" ...
	I0923 14:22:10.730971 1281940 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q9njx" in "kube-system" namespace to be "Ready" ...
	I0923 14:22:10.736071 1281940 pod_ready.go:93] pod "kube-proxy-q9njx" in "kube-system" namespace has status "Ready":"True"
	I0923 14:22:10.736102 1281940 pod_ready.go:82] duration metric: took 5.122916ms for pod "kube-proxy-q9njx" in "kube-system" namespace to be "Ready" ...
	I0923 14:22:10.736115 1281940 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-545656" in "kube-system" namespace to be "Ready" ...
	I0923 14:22:10.741291 1281940 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-545656" in "kube-system" namespace has status "Ready":"True"
	I0923 14:22:10.741316 1281940 pod_ready.go:82] duration metric: took 5.192034ms for pod "kube-scheduler-old-k8s-version-545656" in "kube-system" namespace to be "Ready" ...
	I0923 14:22:10.741327 1281940 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace to be "Ready" ...
	I0923 14:22:12.747424 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:14.747927 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:16.748320 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:18.748368 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:21.248869 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:23.749115 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:25.760322 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:28.248391 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:30.248512 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:32.249521 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:34.747207 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:36.747794 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:38.748753 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:40.749376 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:43.249005 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:45.249725 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:47.747676 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:49.747946 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:51.748246 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:54.247561 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:56.248847 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:22:58.749715 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:01.247789 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:03.248073 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:05.248112 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:07.748026 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:09.748121 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:11.749162 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:14.250921 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:16.251021 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:18.747669 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:20.748191 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:22.748384 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:25.248497 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:27.748316 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:30.317936 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:32.747680 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:34.748268 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:37.247545 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:39.248029 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:41.750645 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:44.247843 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:46.748091 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:49.247241 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:51.247911 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:53.748322 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:56.248069 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:23:58.747382 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:00.748592 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:03.248488 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:05.747233 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:07.747806 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:09.747848 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:12.250027 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:14.747864 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:17.248001 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:19.750603 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:22.247221 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:24.248452 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:26.250760 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:28.748941 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:31.247412 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:33.248940 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:35.748807 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:37.749813 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:40.249419 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:42.251529 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:44.750957 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:47.247956 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:49.247994 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:51.249116 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:53.747227 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:55.748465 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:24:57.749428 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:00.261522 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:02.747912 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:04.749561 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:07.247745 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:09.248362 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:11.318253 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:13.749117 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:16.248335 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:18.748768 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:20.749089 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:23.248219 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:25.248679 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:27.747856 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:29.748532 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:32.248745 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:34.747434 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:36.748886 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:39.248015 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:41.252742 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:43.747102 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:45.747863 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:47.748697 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:50.255970 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:52.747894 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:54.748667 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:57.247853 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:25:59.749285 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:26:01.840245 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:26:04.248178 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:26:06.747307 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:26:08.748297 1281940 pod_ready.go:103] pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace has status "Ready":"False"
	I0923 14:26:10.747775 1281940 pod_ready.go:82] duration metric: took 4m0.006433561s for pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace to be "Ready" ...
	E0923 14:26:10.747802 1281940 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0923 14:26:10.747812 1281940 pod_ready.go:39] duration metric: took 5m21.291819527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
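The block above is minikube's pod_ready wait: it re-reads the metrics-server pod's Ready condition every ~2-2.5s until the 4m budget expires at 14:26:10. A minimal shell sketch of the same check, assuming the kubectl context matches the profile name old-k8s-version-545656 seen in the kubelet lines below:

  # Read the Ready condition that the pod_ready loop is polling:
  kubectl --context old-k8s-version-545656 -n kube-system \
    get pod metrics-server-9975d5f86-vpnpr \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
  # Or let kubectl block with the same 4m budget:
  kubectl --context old-k8s-version-545656 -n kube-system \
    wait --for=condition=Ready pod/metrics-server-9975d5f86-vpnpr --timeout=4m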
	I0923 14:26:10.747827 1281940 api_server.go:52] waiting for apiserver process to appear ...
	I0923 14:26:10.747856 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0923 14:26:10.747927 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 14:26:10.798550 1281940 cri.go:89] found id: "9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe"
	I0923 14:26:10.798571 1281940 cri.go:89] found id: "be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf"
	I0923 14:26:10.798576 1281940 cri.go:89] found id: ""
	I0923 14:26:10.798583 1281940 logs.go:276] 2 containers: [9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf]
	I0923 14:26:10.798641 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.802357 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.805757 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0923 14:26:10.805887 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 14:26:10.843402 1281940 cri.go:89] found id: "b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec"
	I0923 14:26:10.843426 1281940 cri.go:89] found id: "b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef"
	I0923 14:26:10.843433 1281940 cri.go:89] found id: ""
	I0923 14:26:10.843440 1281940 logs.go:276] 2 containers: [b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef]
	I0923 14:26:10.843499 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.846999 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.850503 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0923 14:26:10.850587 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 14:26:10.887831 1281940 cri.go:89] found id: "fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e"
	I0923 14:26:10.887907 1281940 cri.go:89] found id: "a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2"
	I0923 14:26:10.887921 1281940 cri.go:89] found id: ""
	I0923 14:26:10.887929 1281940 logs.go:276] 2 containers: [fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2]
	I0923 14:26:10.887990 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.891543 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.894844 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0923 14:26:10.894918 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 14:26:10.936108 1281940 cri.go:89] found id: "850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9"
	I0923 14:26:10.936176 1281940 cri.go:89] found id: "47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76"
	I0923 14:26:10.936196 1281940 cri.go:89] found id: ""
	I0923 14:26:10.936217 1281940 logs.go:276] 2 containers: [850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9 47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76]
	I0923 14:26:10.936292 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.939970 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.943450 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0923 14:26:10.943574 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 14:26:10.989907 1281940 cri.go:89] found id: "c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2"
	I0923 14:26:10.989983 1281940 cri.go:89] found id: "887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025"
	I0923 14:26:10.990012 1281940 cri.go:89] found id: ""
	I0923 14:26:10.990036 1281940 logs.go:276] 2 containers: [c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2 887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025]
	I0923 14:26:10.990119 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.993733 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.997196 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 14:26:10.997320 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 14:26:11.037023 1281940 cri.go:89] found id: "3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f"
	I0923 14:26:11.037043 1281940 cri.go:89] found id: "84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629"
	I0923 14:26:11.037048 1281940 cri.go:89] found id: ""
	I0923 14:26:11.037056 1281940 logs.go:276] 2 containers: [3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f 84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629]
	I0923 14:26:11.037119 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.041076 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.044729 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0923 14:26:11.044851 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 14:26:11.083606 1281940 cri.go:89] found id: "88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b"
	I0923 14:26:11.083646 1281940 cri.go:89] found id: "1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff"
	I0923 14:26:11.083652 1281940 cri.go:89] found id: ""
	I0923 14:26:11.083660 1281940 logs.go:276] 2 containers: [88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b 1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff]
	I0923 14:26:11.083733 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.087596 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.091193 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0923 14:26:11.091270 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0923 14:26:11.137772 1281940 cri.go:89] found id: "d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542"
	I0923 14:26:11.137808 1281940 cri.go:89] found id: ""
	I0923 14:26:11.137817 1281940 logs.go:276] 1 containers: [d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542]
	I0923 14:26:11.137885 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.141734 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0923 14:26:11.141812 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0923 14:26:11.200210 1281940 cri.go:89] found id: "71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf"
	I0923 14:26:11.200234 1281940 cri.go:89] found id: "283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb"
	I0923 14:26:11.200239 1281940 cri.go:89] found id: ""
	I0923 14:26:11.200247 1281940 logs.go:276] 2 containers: [71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf 283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb]
	I0923 14:26:11.200324 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.203960 1281940 ssh_runner.go:195] Run: which crictl
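Each "listing CRI containers" step above runs crictl over SSH and records up to two container IDs per control-plane component, typically the current container and its pre-restart predecessor (this is the SecondStart of the cluster). The same enumeration can be reproduced by hand from a shell on the node, e.g. via minikube ssh; a sketch:

  # All kube-apiserver containers, running or exited, IDs only:
  sudo crictl ps -a --quiet --name=kube-apiserver
  # Last 400 log lines of one ID, as the gathering steps below do:
  sudo crictl logs --tail 400 <container-id>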
	I0923 14:26:11.207397 1281940 logs.go:123] Gathering logs for coredns [a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2] ...
	I0923 14:26:11.207424 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2"
	I0923 14:26:11.249507 1281940 logs.go:123] Gathering logs for kube-controller-manager [3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f] ...
	I0923 14:26:11.249537 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f"
	I0923 14:26:11.308363 1281940 logs.go:123] Gathering logs for kube-apiserver [9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe] ...
	I0923 14:26:11.308397 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe"
	I0923 14:26:11.363219 1281940 logs.go:123] Gathering logs for kube-scheduler [850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9] ...
	I0923 14:26:11.363254 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9"
	I0923 14:26:11.403243 1281940 logs.go:123] Gathering logs for kube-scheduler [47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76] ...
	I0923 14:26:11.403277 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76"
	I0923 14:26:11.450357 1281940 logs.go:123] Gathering logs for kube-proxy [c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2] ...
	I0923 14:26:11.450386 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2"
	I0923 14:26:11.487548 1281940 logs.go:123] Gathering logs for containerd ...
	I0923 14:26:11.487578 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0923 14:26:11.547830 1281940 logs.go:123] Gathering logs for kubelet ...
	I0923 14:26:11.547869 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 14:26:11.598083 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.434990     660 reflector.go:138] object-"kube-system"/"coredns-token-fq9jh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-fq9jh" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.598334 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435179     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.598552 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435351     660 reflector.go:138] object-"default"/"default-token-mdzzq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-mdzzq" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.598771 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435405     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-xsdtp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-xsdtp" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.598979 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435468     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.599204 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435532     660 reflector.go:138] object-"kube-system"/"metrics-server-token-2jjpk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-2jjpk" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.599425 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435596     660 reflector.go:138] object-"kube-system"/"kindnet-token-9ghmh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-9ghmh" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.599656 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435637     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-2r2wr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-2r2wr" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.607437 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:51 old-k8s-version-545656 kubelet[660]: E0923 14:20:51.101674     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:11.607825 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:51 old-k8s-version-545656 kubelet[660]: E0923 14:20:51.779153     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.612364 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:05 old-k8s-version-545656 kubelet[660]: E0923 14:21:05.645094     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:11.614181 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:17 old-k8s-version-545656 kubelet[660]: E0923 14:21:17.635859     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.614519 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:17 old-k8s-version-545656 kubelet[660]: E0923 14:21:17.889963     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.614977 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:18 old-k8s-version-545656 kubelet[660]: E0923 14:21:18.894190     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.615805 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:21 old-k8s-version-545656 kubelet[660]: E0923 14:21:21.908450     660 pod_workers.go:191] Error syncing pod fd80c4ad-2827-4f73-9606-ebd8da196062 ("storage-provisioner_kube-system(fd80c4ad-2827-4f73-9606-ebd8da196062)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fd80c4ad-2827-4f73-9606-ebd8da196062)"
	W0923 14:26:11.616133 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:22 old-k8s-version-545656 kubelet[660]: E0923 14:21:22.447416     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.618590 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:29 old-k8s-version-545656 kubelet[660]: E0923 14:21:29.646284     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:11.619643 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:37 old-k8s-version-545656 kubelet[660]: E0923 14:21:37.960852     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.619976 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:42 old-k8s-version-545656 kubelet[660]: E0923 14:21:42.447507     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.620162 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:43 old-k8s-version-545656 kubelet[660]: E0923 14:21:43.635783     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.620490 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:54 old-k8s-version-545656 kubelet[660]: E0923 14:21:54.639254     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.620675 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:57 old-k8s-version-545656 kubelet[660]: E0923 14:21:57.635775     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.621267 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:07 old-k8s-version-545656 kubelet[660]: E0923 14:22:07.053536     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.623726 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:10 old-k8s-version-545656 kubelet[660]: E0923 14:22:10.659072     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:11.624076 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:12 old-k8s-version-545656 kubelet[660]: E0923 14:22:12.447387     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.624265 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:25 old-k8s-version-545656 kubelet[660]: E0923 14:22:25.636025     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.624593 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:26 old-k8s-version-545656 kubelet[660]: E0923 14:22:26.635306     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.624779 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:37 old-k8s-version-545656 kubelet[660]: E0923 14:22:37.635772     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.625108 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:41 old-k8s-version-545656 kubelet[660]: E0923 14:22:41.635442     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.625292 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:50 old-k8s-version-545656 kubelet[660]: E0923 14:22:50.636229     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.625882 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:53 old-k8s-version-545656 kubelet[660]: E0923 14:22:53.191243     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.626209 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:02 old-k8s-version-545656 kubelet[660]: E0923 14:23:02.447537     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.626406 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:04 old-k8s-version-545656 kubelet[660]: E0923 14:23:04.635823     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.626743 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:14 old-k8s-version-545656 kubelet[660]: E0923 14:23:14.636224     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.626928 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:19 old-k8s-version-545656 kubelet[660]: E0923 14:23:19.635919     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.627256 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:25 old-k8s-version-545656 kubelet[660]: E0923 14:23:25.635389     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.629696 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:32 old-k8s-version-545656 kubelet[660]: E0923 14:23:32.645182     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:11.630024 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:39 old-k8s-version-545656 kubelet[660]: E0923 14:23:39.635944     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.630211 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:47 old-k8s-version-545656 kubelet[660]: E0923 14:23:47.635761     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.630549 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:54 old-k8s-version-545656 kubelet[660]: E0923 14:23:54.635772     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.630735 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:59 old-k8s-version-545656 kubelet[660]: E0923 14:23:59.635871     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.631061 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:08 old-k8s-version-545656 kubelet[660]: E0923 14:24:08.636021     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.631245 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:14 old-k8s-version-545656 kubelet[660]: E0923 14:24:14.635785     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.631837 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:24 old-k8s-version-545656 kubelet[660]: E0923 14:24:24.457360     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.632024 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:26 old-k8s-version-545656 kubelet[660]: E0923 14:24:26.644873     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.632352 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:32 old-k8s-version-545656 kubelet[660]: E0923 14:24:32.452004     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.632536 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:41 old-k8s-version-545656 kubelet[660]: E0923 14:24:41.635805     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.632866 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:44 old-k8s-version-545656 kubelet[660]: E0923 14:24:44.635490     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.633059 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:54 old-k8s-version-545656 kubelet[660]: E0923 14:24:54.638624     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.633388 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:56 old-k8s-version-545656 kubelet[660]: E0923 14:24:56.639209     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.633572 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:07 old-k8s-version-545656 kubelet[660]: E0923 14:25:07.635738     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.633899 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:11 old-k8s-version-545656 kubelet[660]: E0923 14:25:11.635425     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.634084 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:18 old-k8s-version-545656 kubelet[660]: E0923 14:25:18.635950     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.634412 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:25 old-k8s-version-545656 kubelet[660]: E0923 14:25:25.635412     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.634597 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:33 old-k8s-version-545656 kubelet[660]: E0923 14:25:33.635781     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.634923 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:40 old-k8s-version-545656 kubelet[660]: E0923 14:25:40.635814     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.635109 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:45 old-k8s-version-545656 kubelet[660]: E0923 14:25:45.635813     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.635441 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:52 old-k8s-version-545656 kubelet[660]: E0923 14:25:52.635815     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.635626 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:57 old-k8s-version-545656 kubelet[660]: E0923 14:25:57.635993     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.635952 1281940 logs.go:138] Found kubelet problem: Sep 23 14:26:06 old-k8s-version-545656 kubelet[660]: E0923 14:26:06.636053     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
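The kubelet scan above reduces to two stuck pods: metrics-server-9975d5f86-vpnpr cannot pull its image because it is pinned to the unresolvable registry fake.domain (these tests appear to point metrics-server at a fake registry on purpose, so the ErrImagePull/ImagePullBackOff cycle is the expected steady state), and dashboard-metrics-scraper-8d5bb5db8-kdmx6 is crash-looping with the usual doubling back-off (10s, 20s, 40s, 1m20s, 2m40s). A sketch for confirming both waiting reasons directly, pod and namespace names taken from the log:

  # Expected: ImagePullBackOff (or ErrImagePull, depending on timing)
  kubectl -n kube-system get pod metrics-server-9975d5f86-vpnpr \
    -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
  # Expected: CrashLoopBackOff
  kubectl -n kubernetes-dashboard get pod dashboard-metrics-scraper-8d5bb5db8-kdmx6 \
    -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'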
	I0923 14:26:11.635964 1281940 logs.go:123] Gathering logs for coredns [fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e] ...
	I0923 14:26:11.635978 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e"
	I0923 14:26:11.679435 1281940 logs.go:123] Gathering logs for kube-proxy [887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025] ...
	I0923 14:26:11.679467 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025"
	I0923 14:26:11.717970 1281940 logs.go:123] Gathering logs for kube-controller-manager [84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629] ...
	I0923 14:26:11.717997 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629"
	I0923 14:26:11.768581 1281940 logs.go:123] Gathering logs for kindnet [88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b] ...
	I0923 14:26:11.768618 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b"
	I0923 14:26:11.823129 1281940 logs.go:123] Gathering logs for kindnet [1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff] ...
	I0923 14:26:11.823220 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff"
	I0923 14:26:11.876382 1281940 logs.go:123] Gathering logs for kube-apiserver [be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf] ...
	I0923 14:26:11.876453 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf"
	I0923 14:26:11.961239 1281940 logs.go:123] Gathering logs for describe nodes ...
	I0923 14:26:11.961274 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 14:26:12.143442 1281940 logs.go:123] Gathering logs for etcd [b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec] ...
	I0923 14:26:12.143472 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec"
	I0923 14:26:12.184907 1281940 logs.go:123] Gathering logs for etcd [b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef] ...
	I0923 14:26:12.184938 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef"
	I0923 14:26:12.227992 1281940 logs.go:123] Gathering logs for kubernetes-dashboard [d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542] ...
	I0923 14:26:12.228021 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542"
	I0923 14:26:12.271534 1281940 logs.go:123] Gathering logs for storage-provisioner [71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf] ...
	I0923 14:26:12.271570 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf"
	I0923 14:26:12.311818 1281940 logs.go:123] Gathering logs for storage-provisioner [283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb] ...
	I0923 14:26:12.311846 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb"
	I0923 14:26:12.349085 1281940 logs.go:123] Gathering logs for container status ...
	I0923 14:26:12.349117 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 14:26:12.392122 1281940 logs.go:123] Gathering logs for dmesg ...
	I0923 14:26:12.392153 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
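The dmesg invocation above keeps only kernel messages of warning severity and above, rendered human-readably with the pager and color disabled, then trims to the last 400 lines. The long-option equivalent on util-linux dmesg (a sketch; -P is --nopager, -H is --human, -L=never is --color=never):

  sudo dmesg --nopager --human --color=never \
    --level warn,err,crit,alert,emerg | tail -n 400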
	I0923 14:26:12.408491 1281940 out.go:358] Setting ErrFile to fd 2...
	I0923 14:26:12.408557 1281940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 14:26:12.408612 1281940 out.go:270] X Problems detected in kubelet:
	W0923 14:26:12.408625 1281940 out.go:270]   Sep 23 14:25:40 old-k8s-version-545656 kubelet[660]: E0923 14:25:40.635814     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:12.408631 1281940 out.go:270]   Sep 23 14:25:45 old-k8s-version-545656 kubelet[660]: E0923 14:25:45.635813     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:12.408637 1281940 out.go:270]   Sep 23 14:25:52 old-k8s-version-545656 kubelet[660]: E0923 14:25:52.635815     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:12.408665 1281940 out.go:270]   Sep 23 14:25:57 old-k8s-version-545656 kubelet[660]: E0923 14:25:57.635993     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:12.408672 1281940 out.go:270]   Sep 23 14:26:06 old-k8s-version-545656 kubelet[660]: E0923 14:26:06.636053     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	I0923 14:26:12.408680 1281940 out.go:358] Setting ErrFile to fd 2...
	I0923 14:26:12.408690 1281940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:26:22.409820 1281940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 14:26:22.425513 1281940 api_server.go:72] duration metric: took 5m49.762404341s to wait for apiserver process to appear ...
	I0923 14:26:22.425544 1281940 api_server.go:88] waiting for apiserver healthz status ...
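(The healthz wait that starts here polls the same endpoint the log reports succeeding at 14:26:35 further down. A minimal sketch of reproducing the probe by hand, assuming the apiserver endpoint 192.168.85.2:8443 is reachable from the host:

    # Probe the apiserver health endpoint minikube polls; -k skips TLS
    # verification, since the test cluster uses a self-signed CA.
    curl -k https://192.168.85.2:8443/healthz
    # A healthy apiserver answers with the body: ok
)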
	I0923 14:26:22.425580 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0923 14:26:22.425642 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 14:26:22.496655 1281940 cri.go:89] found id: "9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe"
	I0923 14:26:22.496676 1281940 cri.go:89] found id: "be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf"
	I0923 14:26:22.496681 1281940 cri.go:89] found id: ""
	I0923 14:26:22.496688 1281940 logs.go:276] 2 containers: [9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf]
	I0923 14:26:22.496745 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.501052 1281940 ssh_runner.go:195] Run: which crictl
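(The discovery pattern above repeats for every control-plane component below: list all matching containers by name, including exited ones, then tail each container's logs. Run by hand on the node, using the same commands the gatherer issues:

    # List every kube-apiserver container, running or exited,
    # printing only container IDs, one per line.
    sudo crictl ps -a --quiet --name=kube-apiserver

    # Tail the last 400 log lines of a returned ID, mirroring the
    # "crictl logs --tail 400 <id>" calls later in this log.
    sudo crictl logs --tail 400 9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe
)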
	I0923 14:26:22.505379 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0923 14:26:22.505450 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 14:26:22.582433 1281940 cri.go:89] found id: "b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec"
	I0923 14:26:22.582454 1281940 cri.go:89] found id: "b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef"
	I0923 14:26:22.582459 1281940 cri.go:89] found id: ""
	I0923 14:26:22.582466 1281940 logs.go:276] 2 containers: [b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef]
	I0923 14:26:22.582600 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.592957 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.599817 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0923 14:26:22.599889 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 14:26:22.702122 1281940 cri.go:89] found id: "fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e"
	I0923 14:26:22.702144 1281940 cri.go:89] found id: "a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2"
	I0923 14:26:22.702149 1281940 cri.go:89] found id: ""
	I0923 14:26:22.702156 1281940 logs.go:276] 2 containers: [fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2]
	I0923 14:26:22.702219 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.706378 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.713852 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0923 14:26:22.713925 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 14:26:22.804925 1281940 cri.go:89] found id: "850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9"
	I0923 14:26:22.804947 1281940 cri.go:89] found id: "47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76"
	I0923 14:26:22.804952 1281940 cri.go:89] found id: ""
	I0923 14:26:22.804960 1281940 logs.go:276] 2 containers: [850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9 47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76]
	I0923 14:26:22.805021 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.814066 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.820291 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0923 14:26:22.820428 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 14:26:22.902616 1281940 cri.go:89] found id: "c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2"
	I0923 14:26:22.902695 1281940 cri.go:89] found id: "887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025"
	I0923 14:26:22.902714 1281940 cri.go:89] found id: ""
	I0923 14:26:22.902738 1281940 logs.go:276] 2 containers: [c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2 887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025]
	I0923 14:26:22.902845 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.906700 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.910619 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 14:26:22.910761 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 14:26:22.996237 1281940 cri.go:89] found id: "3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f"
	I0923 14:26:22.996299 1281940 cri.go:89] found id: "84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629"
	I0923 14:26:22.996329 1281940 cri.go:89] found id: ""
	I0923 14:26:22.996350 1281940 logs.go:276] 2 containers: [3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f 84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629]
	I0923 14:26:22.996436 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.000507 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.004974 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0923 14:26:23.005070 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 14:26:23.118716 1281940 cri.go:89] found id: "88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b"
	I0923 14:26:23.118792 1281940 cri.go:89] found id: "1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff"
	I0923 14:26:23.118811 1281940 cri.go:89] found id: ""
	I0923 14:26:23.118832 1281940 logs.go:276] 2 containers: [88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b 1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff]
	I0923 14:26:23.118921 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.136584 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.144894 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0923 14:26:23.144968 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0923 14:26:23.282871 1281940 cri.go:89] found id: "71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf"
	I0923 14:26:23.282891 1281940 cri.go:89] found id: "283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb"
	I0923 14:26:23.282896 1281940 cri.go:89] found id: ""
	I0923 14:26:23.282904 1281940 logs.go:276] 2 containers: [71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf 283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb]
	I0923 14:26:23.282971 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.288843 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.297143 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0923 14:26:23.297234 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0923 14:26:23.373768 1281940 cri.go:89] found id: "d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542"
	I0923 14:26:23.373842 1281940 cri.go:89] found id: ""
	I0923 14:26:23.373864 1281940 logs.go:276] 1 containers: [d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542]
	I0923 14:26:23.373956 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.383300 1281940 logs.go:123] Gathering logs for etcd [b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec] ...
	I0923 14:26:23.383385 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec"
	I0923 14:26:23.473945 1281940 logs.go:123] Gathering logs for etcd [b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef] ...
	I0923 14:26:23.473974 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef"
	I0923 14:26:23.573792 1281940 logs.go:123] Gathering logs for kube-controller-manager [84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629] ...
	I0923 14:26:23.573820 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629"
	I0923 14:26:23.643899 1281940 logs.go:123] Gathering logs for kube-apiserver [9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe] ...
	I0923 14:26:23.643934 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe"
	I0923 14:26:23.784465 1281940 logs.go:123] Gathering logs for coredns [a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2] ...
	I0923 14:26:23.784500 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2"
	I0923 14:26:23.850037 1281940 logs.go:123] Gathering logs for kube-proxy [c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2] ...
	I0923 14:26:23.850067 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2"
	I0923 14:26:23.930274 1281940 logs.go:123] Gathering logs for kube-proxy [887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025] ...
	I0923 14:26:23.930304 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025"
	I0923 14:26:24.017381 1281940 logs.go:123] Gathering logs for kindnet [88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b] ...
	I0923 14:26:24.017416 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b"
	I0923 14:26:24.112084 1281940 logs.go:123] Gathering logs for kindnet [1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff] ...
	I0923 14:26:24.112114 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff"
	I0923 14:26:24.209793 1281940 logs.go:123] Gathering logs for storage-provisioner [71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf] ...
	I0923 14:26:24.209823 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf"
	I0923 14:26:24.277912 1281940 logs.go:123] Gathering logs for kubernetes-dashboard [d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542] ...
	I0923 14:26:24.277945 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542"
	I0923 14:26:24.352566 1281940 logs.go:123] Gathering logs for coredns [fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e] ...
	I0923 14:26:24.352595 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e"
	I0923 14:26:24.434422 1281940 logs.go:123] Gathering logs for containerd ...
	I0923 14:26:24.434455 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0923 14:26:24.535442 1281940 logs.go:123] Gathering logs for dmesg ...
	I0923 14:26:24.535516 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 14:26:24.564761 1281940 logs.go:123] Gathering logs for describe nodes ...
	I0923 14:26:24.564796 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
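(Note that the gatherer runs the kubectl binary bundled for the requested Kubernetes version, with the in-node kubeconfig, rather than any kubectl on the host. The equivalent command on the node, taken verbatim from the line above:

    # Describe all nodes using the version-pinned kubectl shipped in
    # the node image and the node-local kubeconfig.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
)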
	I0923 14:26:24.813627 1281940 logs.go:123] Gathering logs for kube-apiserver [be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf] ...
	I0923 14:26:24.813665 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf"
	I0923 14:26:24.922628 1281940 logs.go:123] Gathering logs for kube-scheduler [47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76] ...
	I0923 14:26:24.922664 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76"
	I0923 14:26:24.988073 1281940 logs.go:123] Gathering logs for kube-controller-manager [3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f] ...
	I0923 14:26:24.988154 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f"
	I0923 14:26:25.083802 1281940 logs.go:123] Gathering logs for storage-provisioner [283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb] ...
	I0923 14:26:25.083887 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb"
	I0923 14:26:25.150389 1281940 logs.go:123] Gathering logs for kubelet ...
	I0923 14:26:25.150461 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
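(The "Found kubelet problem" warnings that follow appear to result from scanning this journal output for error-level kubelet entries. The scan can be approximated by hand, assuming systemd journal access on the node:

    # Pull the last 400 kubelet journal entries, as above, and keep
    # only klog error lines (the "E" severity prefix, e.g. E0923).
    sudo journalctl -u kubelet -n 400 | grep -E ': E[0-9]{4}'
)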
	W0923 14:26:25.221426 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.434990     660 reflector.go:138] object-"kube-system"/"coredns-token-fq9jh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-fq9jh" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.221648 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435179     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.221864 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435351     660 reflector.go:138] object-"default"/"default-token-mdzzq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-mdzzq" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.222082 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435405     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-xsdtp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-xsdtp" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.222289 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435468     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.222511 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435532     660 reflector.go:138] object-"kube-system"/"metrics-server-token-2jjpk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-2jjpk" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.222722 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435596     660 reflector.go:138] object-"kube-system"/"kindnet-token-9ghmh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-9ghmh" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.222950 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435637     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-2r2wr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-2r2wr" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.233382 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:51 old-k8s-version-545656 kubelet[660]: E0923 14:20:51.101674     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:25.233784 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:51 old-k8s-version-545656 kubelet[660]: E0923 14:20:51.779153     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.242709 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:05 old-k8s-version-545656 kubelet[660]: E0923 14:21:05.645094     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:25.244532 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:17 old-k8s-version-545656 kubelet[660]: E0923 14:21:17.635859     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.244866 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:17 old-k8s-version-545656 kubelet[660]: E0923 14:21:17.889963     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.245322 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:18 old-k8s-version-545656 kubelet[660]: E0923 14:21:18.894190     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.246094 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:21 old-k8s-version-545656 kubelet[660]: E0923 14:21:21.908450     660 pod_workers.go:191] Error syncing pod fd80c4ad-2827-4f73-9606-ebd8da196062 ("storage-provisioner_kube-system(fd80c4ad-2827-4f73-9606-ebd8da196062)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fd80c4ad-2827-4f73-9606-ebd8da196062)"
	W0923 14:26:25.246429 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:22 old-k8s-version-545656 kubelet[660]: E0923 14:21:22.447416     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.253319 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:29 old-k8s-version-545656 kubelet[660]: E0923 14:21:29.646284     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:25.254390 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:37 old-k8s-version-545656 kubelet[660]: E0923 14:21:37.960852     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.254719 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:42 old-k8s-version-545656 kubelet[660]: E0923 14:21:42.447507     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.254906 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:43 old-k8s-version-545656 kubelet[660]: E0923 14:21:43.635783     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.255234 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:54 old-k8s-version-545656 kubelet[660]: E0923 14:21:54.639254     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.255450 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:57 old-k8s-version-545656 kubelet[660]: E0923 14:21:57.635775     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.256042 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:07 old-k8s-version-545656 kubelet[660]: E0923 14:22:07.053536     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.262867 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:10 old-k8s-version-545656 kubelet[660]: E0923 14:22:10.659072     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:25.263212 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:12 old-k8s-version-545656 kubelet[660]: E0923 14:22:12.447387     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.263413 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:25 old-k8s-version-545656 kubelet[660]: E0923 14:22:25.636025     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.263740 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:26 old-k8s-version-545656 kubelet[660]: E0923 14:22:26.635306     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.263924 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:37 old-k8s-version-545656 kubelet[660]: E0923 14:22:37.635772     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.264250 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:41 old-k8s-version-545656 kubelet[660]: E0923 14:22:41.635442     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.264437 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:50 old-k8s-version-545656 kubelet[660]: E0923 14:22:50.636229     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.265025 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:53 old-k8s-version-545656 kubelet[660]: E0923 14:22:53.191243     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.265354 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:02 old-k8s-version-545656 kubelet[660]: E0923 14:23:02.447537     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.265539 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:04 old-k8s-version-545656 kubelet[660]: E0923 14:23:04.635823     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.265873 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:14 old-k8s-version-545656 kubelet[660]: E0923 14:23:14.636224     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.266058 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:19 old-k8s-version-545656 kubelet[660]: E0923 14:23:19.635919     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.266386 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:25 old-k8s-version-545656 kubelet[660]: E0923 14:23:25.635389     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.273480 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:32 old-k8s-version-545656 kubelet[660]: E0923 14:23:32.645182     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:25.273830 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:39 old-k8s-version-545656 kubelet[660]: E0923 14:23:39.635944     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.274017 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:47 old-k8s-version-545656 kubelet[660]: E0923 14:23:47.635761     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.274349 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:54 old-k8s-version-545656 kubelet[660]: E0923 14:23:54.635772     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.274538 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:59 old-k8s-version-545656 kubelet[660]: E0923 14:23:59.635871     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.274863 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:08 old-k8s-version-545656 kubelet[660]: E0923 14:24:08.636021     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.275047 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:14 old-k8s-version-545656 kubelet[660]: E0923 14:24:14.635785     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.275640 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:24 old-k8s-version-545656 kubelet[660]: E0923 14:24:24.457360     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.275827 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:26 old-k8s-version-545656 kubelet[660]: E0923 14:24:26.644873     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.276154 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:32 old-k8s-version-545656 kubelet[660]: E0923 14:24:32.452004     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.276341 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:41 old-k8s-version-545656 kubelet[660]: E0923 14:24:41.635805     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.281642 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:44 old-k8s-version-545656 kubelet[660]: E0923 14:24:44.635490     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.281846 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:54 old-k8s-version-545656 kubelet[660]: E0923 14:24:54.638624     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.282175 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:56 old-k8s-version-545656 kubelet[660]: E0923 14:24:56.639209     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.282363 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:07 old-k8s-version-545656 kubelet[660]: E0923 14:25:07.635738     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.282716 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:11 old-k8s-version-545656 kubelet[660]: E0923 14:25:11.635425     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.282901 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:18 old-k8s-version-545656 kubelet[660]: E0923 14:25:18.635950     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.283227 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:25 old-k8s-version-545656 kubelet[660]: E0923 14:25:25.635412     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.283421 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:33 old-k8s-version-545656 kubelet[660]: E0923 14:25:33.635781     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.283748 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:40 old-k8s-version-545656 kubelet[660]: E0923 14:25:40.635814     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.283932 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:45 old-k8s-version-545656 kubelet[660]: E0923 14:25:45.635813     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.284260 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:52 old-k8s-version-545656 kubelet[660]: E0923 14:25:52.635815     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.284444 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:57 old-k8s-version-545656 kubelet[660]: E0923 14:25:57.635993     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.284772 1281940 logs.go:138] Found kubelet problem: Sep 23 14:26:06 old-k8s-version-545656 kubelet[660]: E0923 14:26:06.636053     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.284963 1281940 logs.go:138] Found kubelet problem: Sep 23 14:26:12 old-k8s-version-545656 kubelet[660]: E0923 14:26:12.639509     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.285288 1281940 logs.go:138] Found kubelet problem: Sep 23 14:26:18 old-k8s-version-545656 kubelet[660]: E0923 14:26:18.635458     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	I0923 14:26:25.285298 1281940 logs.go:123] Gathering logs for container status ...
	I0923 14:26:25.285312 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
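(Note the fallback chain in this status command: when `which` finds crictl it is used, and if crictl is missing or its invocation fails, the command falls back to the Docker CLI, so the same line works on both containerd-backed and docker-backed nodes:

    # Use crictl if it resolves on the PATH; on failure, fall back to docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
)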
	I0923 14:26:25.366180 1281940 logs.go:123] Gathering logs for kube-scheduler [850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9] ...
	I0923 14:26:25.366216 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9"
	I0923 14:26:25.443835 1281940 out.go:358] Setting ErrFile to fd 2...
	I0923 14:26:25.443867 1281940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 14:26:25.443913 1281940 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0923 14:26:25.443934 1281940 out.go:270]   Sep 23 14:25:52 old-k8s-version-545656 kubelet[660]: E0923 14:25:52.635815     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	  Sep 23 14:25:52 old-k8s-version-545656 kubelet[660]: E0923 14:25:52.635815     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.443949 1281940 out.go:270]   Sep 23 14:25:57 old-k8s-version-545656 kubelet[660]: E0923 14:25:57.635993     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 23 14:25:57 old-k8s-version-545656 kubelet[660]: E0923 14:25:57.635993     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.443966 1281940 out.go:270]   Sep 23 14:26:06 old-k8s-version-545656 kubelet[660]: E0923 14:26:06.636053     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	  Sep 23 14:26:06 old-k8s-version-545656 kubelet[660]: E0923 14:26:06.636053     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.443973 1281940 out.go:270]   Sep 23 14:26:12 old-k8s-version-545656 kubelet[660]: E0923 14:26:12.639509     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 23 14:26:12 old-k8s-version-545656 kubelet[660]: E0923 14:26:12.639509     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.443985 1281940 out.go:270]   Sep 23 14:26:18 old-k8s-version-545656 kubelet[660]: E0923 14:26:18.635458     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	  Sep 23 14:26:18 old-k8s-version-545656 kubelet[660]: E0923 14:26:18.635458     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	I0923 14:26:25.443992 1281940 out.go:358] Setting ErrFile to fd 2...
	I0923 14:26:25.443998 1281940 out.go:392] TERM=,COLORTERM=, which probably does not support color
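(Every metrics-server failure above traces to a single cause visible in the ErrImagePull lines: the pod's image points at fake.domain/registry.k8s.io/echoserver:1.4, and fake.domain never resolves, so the back-off can never clear. One way to confirm the configured image, a sketch assuming kubectl access to this cluster and that the pod belongs to a deployment named metrics-server:

    # Print the image configured on the metrics-server deployment; here it
    # would show the unresolvable fake.domain registry.
    kubectl -n kube-system get deployment metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
)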
	I0923 14:26:35.445508 1281940 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0923 14:26:35.460333 1281940 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
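(healthz succeeds here, yet the exit below complains that the control plane never updated to v1.20.0, so the failure is about version reconciliation rather than API availability. A quick way to see what version the node actually reports, a sketch assuming kubectl access:

    # Show the kubelet version the node reports in its status; the wait
    # loop presumably compares something like this against v1.20.0.
    kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.kubeletVersion}'
)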
	I0923 14:26:35.467954 1281940 out.go:201] 
	W0923 14:26:35.471847 1281940 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0923 14:26:35.471902 1281940 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0923 14:26:35.471923 1281940 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0923 14:26:35.471933 1281940 out.go:270] * 
	* 
	W0923 14:26:35.474059 1281940 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 14:26:35.478932 1281940 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-545656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
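start_stop_delete_test.go records both the failing args and minikube's own remediation advice, so the failure can be retried by hand. A minimal recovery sketch (not part of the test run), using only commands quoted in the log above; out/minikube-linux-arm64 is the binary under test:

	# Purge stale profile state, per the K8S_UNHEALTHY_CONTROL_PLANE suggestion above
	out/minikube-linux-arm64 delete --all --purge
	# Re-run the recorded start args verbatim
	out/minikube-linux-arm64 start -p old-k8s-version-545656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
	# If it still exits with status 102, capture logs to attach to the GitHub issue
	out/minikube-linux-arm64 logs --file=logs.txt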
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-545656
helpers_test.go:235: (dbg) docker inspect old-k8s-version-545656:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "34036aa0a1f064604a44f7d3c9676e6faa24d3ff9e087a591a6a4f81affc8eb1",
	        "Created": "2024-09-23T14:17:13.304530078Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1282139,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T14:20:25.473827262Z",
	            "FinishedAt": "2024-09-23T14:20:24.550381593Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/34036aa0a1f064604a44f7d3c9676e6faa24d3ff9e087a591a6a4f81affc8eb1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/34036aa0a1f064604a44f7d3c9676e6faa24d3ff9e087a591a6a4f81affc8eb1/hostname",
	        "HostsPath": "/var/lib/docker/containers/34036aa0a1f064604a44f7d3c9676e6faa24d3ff9e087a591a6a4f81affc8eb1/hosts",
	        "LogPath": "/var/lib/docker/containers/34036aa0a1f064604a44f7d3c9676e6faa24d3ff9e087a591a6a4f81affc8eb1/34036aa0a1f064604a44f7d3c9676e6faa24d3ff9e087a591a6a4f81affc8eb1-json.log",
	        "Name": "/old-k8s-version-545656",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-545656:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-545656",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f47bcb4e65aa487121ecb69fb1715b2640be1b31febc3f69eeb5e20ef154905b-init/diff:/var/lib/docker/overlay2/1bc43114731848917669438134af7ba5a2b2d3064205845371927727bb2fadd6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f47bcb4e65aa487121ecb69fb1715b2640be1b31febc3f69eeb5e20ef154905b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f47bcb4e65aa487121ecb69fb1715b2640be1b31febc3f69eeb5e20ef154905b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f47bcb4e65aa487121ecb69fb1715b2640be1b31febc3f69eeb5e20ef154905b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-545656",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-545656/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-545656",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-545656",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-545656",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "abe2fa7c922c99c49b18d7d990642833c14a9a67950d9b8c79168da101436073",
	            "SandboxKey": "/var/run/docker/netns/abe2fa7c922c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41790"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41791"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41794"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41792"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41793"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-545656": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "00d32a6abeb9c022e062c5af2a9239b28fba6fbda81555f3c554a2c47276e1b0",
	                    "EndpointID": "04476a97f3fd8d2a7e0cf98184176c8b52260923bfa1e6a4001c547e83cd970d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-545656",
	                        "34036aa0a1f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
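The JSON dump above is the harness's full archive of container state; when triaging by hand, individual fields can be pulled with Go templates instead (a sketch, not part of the test run; the template paths mirror the JSON keys shown above):

	# State block: confirms the container kept running after the failed start
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-545656
	# Host-port bindings, including 8443/tcp for the apiserver probed at 192.168.85.2:8443
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-545656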
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-545656 -n old-k8s-version-545656
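The probe above uses minikube's Go-template status output; other fields of the status struct can be queried the same way (a usage sketch, assuming the standard Host/Kubelet/APIServer status fields):

	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-545656 -n old-k8s-version-545656
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-545656 -n old-k8s-version-545656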
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-545656 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-545656 logs -n 25: (2.563294974s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-141863                           | enable-default-cni-141863 | jenkins | v1.34.0 | 23 Sep 24 14:18 UTC | 23 Sep 24 14:18 UTC |
	|         | sudo containerd config dump                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-141863                           | enable-default-cni-141863 | jenkins | v1.34.0 | 23 Sep 24 14:18 UTC |                     |
	|         | sudo systemctl status crio                             |                           |         |         |                     |                     |
	|         | --all --full --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-141863                           | enable-default-cni-141863 | jenkins | v1.34.0 | 23 Sep 24 14:18 UTC | 23 Sep 24 14:18 UTC |
	|         | sudo systemctl cat crio                                |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-141863                           | enable-default-cni-141863 | jenkins | v1.34.0 | 23 Sep 24 14:18 UTC | 23 Sep 24 14:18 UTC |
	|         | sudo find /etc/crio -type f                            |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                           |         |         |                     |                     |
	|         | \;                                                     |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-141863                           | enable-default-cni-141863 | jenkins | v1.34.0 | 23 Sep 24 14:18 UTC | 23 Sep 24 14:18 UTC |
	|         | sudo crio config                                       |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-141863                           | enable-default-cni-141863 | jenkins | v1.34.0 | 23 Sep 24 14:18 UTC | 23 Sep 24 14:18 UTC |
	| start   | -p no-preload-700594                                   | no-preload-700594         | jenkins | v1.34.0 | 23 Sep 24 14:18 UTC | 23 Sep 24 14:19 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-700594             | no-preload-700594         | jenkins | v1.34.0 | 23 Sep 24 14:19 UTC | 23 Sep 24 14:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-700594                                   | no-preload-700594         | jenkins | v1.34.0 | 23 Sep 24 14:19 UTC | 23 Sep 24 14:19 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-700594                  | no-preload-700594         | jenkins | v1.34.0 | 23 Sep 24 14:19 UTC | 23 Sep 24 14:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-700594                                   | no-preload-700594         | jenkins | v1.34.0 | 23 Sep 24 14:19 UTC | 23 Sep 24 14:24 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-545656        | old-k8s-version-545656    | jenkins | v1.34.0 | 23 Sep 24 14:20 UTC | 23 Sep 24 14:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-545656                              | old-k8s-version-545656    | jenkins | v1.34.0 | 23 Sep 24 14:20 UTC | 23 Sep 24 14:20 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-545656             | old-k8s-version-545656    | jenkins | v1.34.0 | 23 Sep 24 14:20 UTC | 23 Sep 24 14:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-545656                              | old-k8s-version-545656    | jenkins | v1.34.0 | 23 Sep 24 14:20 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| image   | no-preload-700594 image list                           | no-preload-700594         | jenkins | v1.34.0 | 23 Sep 24 14:24 UTC | 23 Sep 24 14:24 UTC |
	|         | --format=json                                          |                           |         |         |                     |                     |
	| pause   | -p no-preload-700594                                   | no-preload-700594         | jenkins | v1.34.0 | 23 Sep 24 14:24 UTC | 23 Sep 24 14:24 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| unpause | -p no-preload-700594                                   | no-preload-700594         | jenkins | v1.34.0 | 23 Sep 24 14:24 UTC | 23 Sep 24 14:24 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| delete  | -p no-preload-700594                                   | no-preload-700594         | jenkins | v1.34.0 | 23 Sep 24 14:24 UTC | 23 Sep 24 14:24 UTC |
	| delete  | -p no-preload-700594                                   | no-preload-700594         | jenkins | v1.34.0 | 23 Sep 24 14:24 UTC | 23 Sep 24 14:24 UTC |
	| start   | -p embed-certs-672015                                  | embed-certs-672015        | jenkins | v1.34.0 | 23 Sep 24 14:24 UTC | 23 Sep 24 14:25 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-672015            | embed-certs-672015        | jenkins | v1.34.0 | 23 Sep 24 14:25 UTC | 23 Sep 24 14:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-672015                                  | embed-certs-672015        | jenkins | v1.34.0 | 23 Sep 24 14:26 UTC | 23 Sep 24 14:26 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-672015                 | embed-certs-672015        | jenkins | v1.34.0 | 23 Sep 24 14:26 UTC | 23 Sep 24 14:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-672015                                  | embed-certs-672015        | jenkins | v1.34.0 | 23 Sep 24 14:26 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 14:26:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 14:26:13.448702 1291906 out.go:345] Setting OutFile to fd 1 ...
	I0923 14:26:13.448952 1291906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:26:13.448965 1291906 out.go:358] Setting ErrFile to fd 2...
	I0923 14:26:13.448971 1291906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:26:13.449275 1291906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	I0923 14:26:13.449675 1291906 out.go:352] Setting JSON to false
	I0923 14:26:13.450764 1291906 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":158920,"bootTime":1726942654,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0923 14:26:13.450838 1291906 start.go:139] virtualization:  
	I0923 14:26:13.456150 1291906 out.go:177] * [embed-certs-672015] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 14:26:13.458946 1291906 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 14:26:13.458954 1291906 notify.go:220] Checking for updates...
	I0923 14:26:13.461747 1291906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 14:26:13.465015 1291906 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 14:26:13.467656 1291906 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	I0923 14:26:13.470608 1291906 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 14:26:13.473669 1291906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 14:26:13.476895 1291906 config.go:182] Loaded profile config "embed-certs-672015": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 14:26:13.477514 1291906 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 14:26:13.509045 1291906 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 14:26:13.509217 1291906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 14:26:13.566743 1291906 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 14:26:13.556841691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 14:26:13.566861 1291906 docker.go:318] overlay module found
	I0923 14:26:13.569656 1291906 out.go:177] * Using the docker driver based on existing profile
	I0923 14:26:13.572375 1291906 start.go:297] selected driver: docker
	I0923 14:26:13.572398 1291906 start.go:901] validating driver "docker" against &{Name:embed-certs-672015 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-672015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 14:26:13.572519 1291906 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 14:26:13.573180 1291906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 14:26:13.628426 1291906 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 14:26:13.61898008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 14:26:13.628833 1291906 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 14:26:13.628866 1291906 cni.go:84] Creating CNI manager for ""
	I0923 14:26:13.628905 1291906 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 14:26:13.628953 1291906 start.go:340] cluster config:
	{Name:embed-certs-672015 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-672015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 14:26:13.631814 1291906 out.go:177] * Starting "embed-certs-672015" primary control-plane node in "embed-certs-672015" cluster
	I0923 14:26:13.634338 1291906 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 14:26:13.637017 1291906 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 14:26:13.639568 1291906 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 14:26:13.639622 1291906 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0923 14:26:13.639634 1291906 cache.go:56] Caching tarball of preloaded images
	I0923 14:26:13.639668 1291906 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 14:26:13.639734 1291906 preload.go:172] Found /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 14:26:13.639744 1291906 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0923 14:26:13.639861 1291906 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/embed-certs-672015/config.json ...
	I0923 14:26:13.660034 1291906 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 14:26:13.660072 1291906 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 14:26:13.660101 1291906 cache.go:194] Successfully downloaded all kic artifacts
	I0923 14:26:13.660129 1291906 start.go:360] acquireMachinesLock for embed-certs-672015: {Name:mka732b029ea18139d0e66e5b264a08190767500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 14:26:13.660217 1291906 start.go:364] duration metric: took 56.45µs to acquireMachinesLock for "embed-certs-672015"
	I0923 14:26:13.660241 1291906 start.go:96] Skipping create...Using existing machine configuration
	I0923 14:26:13.660252 1291906 fix.go:54] fixHost starting: 
	I0923 14:26:13.660626 1291906 cli_runner.go:164] Run: docker container inspect embed-certs-672015 --format={{.State.Status}}
	I0923 14:26:13.680839 1291906 fix.go:112] recreateIfNeeded on embed-certs-672015: state=Stopped err=<nil>
	W0923 14:26:13.680867 1291906 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 14:26:13.683909 1291906 out.go:177] * Restarting existing docker container for "embed-certs-672015" ...
	I0923 14:26:10.747775 1281940 pod_ready.go:82] duration metric: took 4m0.006433561s for pod "metrics-server-9975d5f86-vpnpr" in "kube-system" namespace to be "Ready" ...
	E0923 14:26:10.747802 1281940 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0923 14:26:10.747812 1281940 pod_ready.go:39] duration metric: took 5m21.291819527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 14:26:10.747827 1281940 api_server.go:52] waiting for apiserver process to appear ...
	I0923 14:26:10.747856 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0923 14:26:10.747927 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 14:26:10.798550 1281940 cri.go:89] found id: "9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe"
	I0923 14:26:10.798571 1281940 cri.go:89] found id: "be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf"
	I0923 14:26:10.798576 1281940 cri.go:89] found id: ""
	I0923 14:26:10.798583 1281940 logs.go:276] 2 containers: [9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf]
	I0923 14:26:10.798641 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.802357 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.805757 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0923 14:26:10.805887 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 14:26:10.843402 1281940 cri.go:89] found id: "b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec"
	I0923 14:26:10.843426 1281940 cri.go:89] found id: "b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef"
	I0923 14:26:10.843433 1281940 cri.go:89] found id: ""
	I0923 14:26:10.843440 1281940 logs.go:276] 2 containers: [b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef]
	I0923 14:26:10.843499 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.846999 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.850503 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0923 14:26:10.850587 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 14:26:10.887831 1281940 cri.go:89] found id: "fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e"
	I0923 14:26:10.887907 1281940 cri.go:89] found id: "a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2"
	I0923 14:26:10.887921 1281940 cri.go:89] found id: ""
	I0923 14:26:10.887929 1281940 logs.go:276] 2 containers: [fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2]
	I0923 14:26:10.887990 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.891543 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.894844 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0923 14:26:10.894918 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 14:26:10.936108 1281940 cri.go:89] found id: "850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9"
	I0923 14:26:10.936176 1281940 cri.go:89] found id: "47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76"
	I0923 14:26:10.936196 1281940 cri.go:89] found id: ""
	I0923 14:26:10.936217 1281940 logs.go:276] 2 containers: [850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9 47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76]
	I0923 14:26:10.936292 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.939970 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.943450 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0923 14:26:10.943574 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 14:26:10.989907 1281940 cri.go:89] found id: "c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2"
	I0923 14:26:10.989983 1281940 cri.go:89] found id: "887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025"
	I0923 14:26:10.990012 1281940 cri.go:89] found id: ""
	I0923 14:26:10.990036 1281940 logs.go:276] 2 containers: [c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2 887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025]
	I0923 14:26:10.990119 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.993733 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:10.997196 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 14:26:10.997320 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 14:26:11.037023 1281940 cri.go:89] found id: "3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f"
	I0923 14:26:11.037043 1281940 cri.go:89] found id: "84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629"
	I0923 14:26:11.037048 1281940 cri.go:89] found id: ""
	I0923 14:26:11.037056 1281940 logs.go:276] 2 containers: [3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f 84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629]
	I0923 14:26:11.037119 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.041076 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.044729 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0923 14:26:11.044851 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 14:26:11.083606 1281940 cri.go:89] found id: "88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b"
	I0923 14:26:11.083646 1281940 cri.go:89] found id: "1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff"
	I0923 14:26:11.083652 1281940 cri.go:89] found id: ""
	I0923 14:26:11.083660 1281940 logs.go:276] 2 containers: [88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b 1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff]
	I0923 14:26:11.083733 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.087596 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.091193 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0923 14:26:11.091270 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0923 14:26:11.137772 1281940 cri.go:89] found id: "d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542"
	I0923 14:26:11.137808 1281940 cri.go:89] found id: ""
	I0923 14:26:11.137817 1281940 logs.go:276] 1 containers: [d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542]
	I0923 14:26:11.137885 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.141734 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0923 14:26:11.141812 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0923 14:26:11.200210 1281940 cri.go:89] found id: "71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf"
	I0923 14:26:11.200234 1281940 cri.go:89] found id: "283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb"
	I0923 14:26:11.200239 1281940 cri.go:89] found id: ""
	I0923 14:26:11.200247 1281940 logs.go:276] 2 containers: [71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf 283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb]
	I0923 14:26:11.200324 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.203960 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:11.207397 1281940 logs.go:123] Gathering logs for coredns [a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2] ...
	I0923 14:26:11.207424 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2"
	I0923 14:26:11.249507 1281940 logs.go:123] Gathering logs for kube-controller-manager [3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f] ...
	I0923 14:26:11.249537 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f"
	I0923 14:26:11.308363 1281940 logs.go:123] Gathering logs for kube-apiserver [9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe] ...
	I0923 14:26:11.308397 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe"
	I0923 14:26:11.363219 1281940 logs.go:123] Gathering logs for kube-scheduler [850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9] ...
	I0923 14:26:11.363254 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9"
	I0923 14:26:11.403243 1281940 logs.go:123] Gathering logs for kube-scheduler [47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76] ...
	I0923 14:26:11.403277 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76"
	I0923 14:26:11.450357 1281940 logs.go:123] Gathering logs for kube-proxy [c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2] ...
	I0923 14:26:11.450386 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2"
	I0923 14:26:11.487548 1281940 logs.go:123] Gathering logs for containerd ...
	I0923 14:26:11.487578 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0923 14:26:11.547830 1281940 logs.go:123] Gathering logs for kubelet ...
	I0923 14:26:11.547869 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 14:26:11.598083 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.434990     660 reflector.go:138] object-"kube-system"/"coredns-token-fq9jh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-fq9jh" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.598334 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435179     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.598552 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435351     660 reflector.go:138] object-"default"/"default-token-mdzzq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-mdzzq" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.598771 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435405     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-xsdtp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-xsdtp" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.598979 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435468     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.599204 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435532     660 reflector.go:138] object-"kube-system"/"metrics-server-token-2jjpk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-2jjpk" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.599425 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435596     660 reflector.go:138] object-"kube-system"/"kindnet-token-9ghmh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-9ghmh" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.599656 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435637     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-2r2wr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-2r2wr" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:11.607437 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:51 old-k8s-version-545656 kubelet[660]: E0923 14:20:51.101674     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:11.607825 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:51 old-k8s-version-545656 kubelet[660]: E0923 14:20:51.779153     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.612364 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:05 old-k8s-version-545656 kubelet[660]: E0923 14:21:05.645094     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:11.614181 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:17 old-k8s-version-545656 kubelet[660]: E0923 14:21:17.635859     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.614519 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:17 old-k8s-version-545656 kubelet[660]: E0923 14:21:17.889963     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.614977 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:18 old-k8s-version-545656 kubelet[660]: E0923 14:21:18.894190     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.615805 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:21 old-k8s-version-545656 kubelet[660]: E0923 14:21:21.908450     660 pod_workers.go:191] Error syncing pod fd80c4ad-2827-4f73-9606-ebd8da196062 ("storage-provisioner_kube-system(fd80c4ad-2827-4f73-9606-ebd8da196062)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fd80c4ad-2827-4f73-9606-ebd8da196062)"
	W0923 14:26:11.616133 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:22 old-k8s-version-545656 kubelet[660]: E0923 14:21:22.447416     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.618590 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:29 old-k8s-version-545656 kubelet[660]: E0923 14:21:29.646284     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:11.619643 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:37 old-k8s-version-545656 kubelet[660]: E0923 14:21:37.960852     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.619976 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:42 old-k8s-version-545656 kubelet[660]: E0923 14:21:42.447507     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.620162 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:43 old-k8s-version-545656 kubelet[660]: E0923 14:21:43.635783     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.620490 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:54 old-k8s-version-545656 kubelet[660]: E0923 14:21:54.639254     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.620675 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:57 old-k8s-version-545656 kubelet[660]: E0923 14:21:57.635775     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.621267 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:07 old-k8s-version-545656 kubelet[660]: E0923 14:22:07.053536     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.623726 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:10 old-k8s-version-545656 kubelet[660]: E0923 14:22:10.659072     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:11.624076 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:12 old-k8s-version-545656 kubelet[660]: E0923 14:22:12.447387     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.624265 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:25 old-k8s-version-545656 kubelet[660]: E0923 14:22:25.636025     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.624593 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:26 old-k8s-version-545656 kubelet[660]: E0923 14:22:26.635306     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.624779 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:37 old-k8s-version-545656 kubelet[660]: E0923 14:22:37.635772     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.625108 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:41 old-k8s-version-545656 kubelet[660]: E0923 14:22:41.635442     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.625292 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:50 old-k8s-version-545656 kubelet[660]: E0923 14:22:50.636229     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.625882 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:53 old-k8s-version-545656 kubelet[660]: E0923 14:22:53.191243     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.626209 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:02 old-k8s-version-545656 kubelet[660]: E0923 14:23:02.447537     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.626406 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:04 old-k8s-version-545656 kubelet[660]: E0923 14:23:04.635823     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.626743 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:14 old-k8s-version-545656 kubelet[660]: E0923 14:23:14.636224     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.626928 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:19 old-k8s-version-545656 kubelet[660]: E0923 14:23:19.635919     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.627256 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:25 old-k8s-version-545656 kubelet[660]: E0923 14:23:25.635389     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.629696 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:32 old-k8s-version-545656 kubelet[660]: E0923 14:23:32.645182     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:11.630024 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:39 old-k8s-version-545656 kubelet[660]: E0923 14:23:39.635944     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.630211 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:47 old-k8s-version-545656 kubelet[660]: E0923 14:23:47.635761     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.630549 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:54 old-k8s-version-545656 kubelet[660]: E0923 14:23:54.635772     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.630735 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:59 old-k8s-version-545656 kubelet[660]: E0923 14:23:59.635871     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.631061 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:08 old-k8s-version-545656 kubelet[660]: E0923 14:24:08.636021     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.631245 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:14 old-k8s-version-545656 kubelet[660]: E0923 14:24:14.635785     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.631837 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:24 old-k8s-version-545656 kubelet[660]: E0923 14:24:24.457360     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.632024 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:26 old-k8s-version-545656 kubelet[660]: E0923 14:24:26.644873     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.632352 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:32 old-k8s-version-545656 kubelet[660]: E0923 14:24:32.452004     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.632536 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:41 old-k8s-version-545656 kubelet[660]: E0923 14:24:41.635805     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.632866 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:44 old-k8s-version-545656 kubelet[660]: E0923 14:24:44.635490     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.633059 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:54 old-k8s-version-545656 kubelet[660]: E0923 14:24:54.638624     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.633388 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:56 old-k8s-version-545656 kubelet[660]: E0923 14:24:56.639209     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.633572 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:07 old-k8s-version-545656 kubelet[660]: E0923 14:25:07.635738     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.633899 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:11 old-k8s-version-545656 kubelet[660]: E0923 14:25:11.635425     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.634084 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:18 old-k8s-version-545656 kubelet[660]: E0923 14:25:18.635950     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.634412 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:25 old-k8s-version-545656 kubelet[660]: E0923 14:25:25.635412     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.634597 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:33 old-k8s-version-545656 kubelet[660]: E0923 14:25:33.635781     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.634923 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:40 old-k8s-version-545656 kubelet[660]: E0923 14:25:40.635814     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.635109 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:45 old-k8s-version-545656 kubelet[660]: E0923 14:25:45.635813     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.635441 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:52 old-k8s-version-545656 kubelet[660]: E0923 14:25:52.635815     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:11.635626 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:57 old-k8s-version-545656 kubelet[660]: E0923 14:25:57.635993     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:11.635952 1281940 logs.go:138] Found kubelet problem: Sep 23 14:26:06 old-k8s-version-545656 kubelet[660]: E0923 14:26:06.636053     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
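The long run of "Found kubelet problem" entries above reduces to two failure loops: metrics-server never starts because its image is pinned to the unresolvable registry fake.domain (the lookup against 192.168.85.1:53 fails), so it alternates between ErrImagePull and ImagePullBackOff; and dashboard-metrics-scraper is crash-looping, with the restart back-off doubling 10s -> 20s -> 40s -> 1m20s -> 2m40s. A hedged way to inspect both by hand, assuming the profile name doubles as the kubectl context:

    # Hypothetical spot-check against the pods named in the log above:
    kubectl --context old-k8s-version-545656 -n kube-system describe pod metrics-server-9975d5f86-vpnpr
    kubectl --context old-k8s-version-545656 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-kdmx6 --previous
    # The pull failure is a plain DNS error and should be reproducible from inside the node:
    minikube -p old-k8s-version-545656 ssh -- getent hosts fake.domain   # expected: non-zero exit, no such host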
	I0923 14:26:11.635964 1281940 logs.go:123] Gathering logs for coredns [fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e] ...
	I0923 14:26:11.635978 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e"
	I0923 14:26:11.679435 1281940 logs.go:123] Gathering logs for kube-proxy [887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025] ...
	I0923 14:26:11.679467 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025"
	I0923 14:26:11.717970 1281940 logs.go:123] Gathering logs for kube-controller-manager [84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629] ...
	I0923 14:26:11.717997 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629"
	I0923 14:26:11.768581 1281940 logs.go:123] Gathering logs for kindnet [88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b] ...
	I0923 14:26:11.768618 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b"
	I0923 14:26:11.823129 1281940 logs.go:123] Gathering logs for kindnet [1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff] ...
	I0923 14:26:11.823220 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff"
	I0923 14:26:11.876382 1281940 logs.go:123] Gathering logs for kube-apiserver [be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf] ...
	I0923 14:26:11.876453 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf"
	I0923 14:26:11.961239 1281940 logs.go:123] Gathering logs for describe nodes ...
	I0923 14:26:11.961274 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 14:26:12.143442 1281940 logs.go:123] Gathering logs for etcd [b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec] ...
	I0923 14:26:12.143472 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec"
	I0923 14:26:12.184907 1281940 logs.go:123] Gathering logs for etcd [b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef] ...
	I0923 14:26:12.184938 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef"
	I0923 14:26:12.227992 1281940 logs.go:123] Gathering logs for kubernetes-dashboard [d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542] ...
	I0923 14:26:12.228021 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542"
	I0923 14:26:12.271534 1281940 logs.go:123] Gathering logs for storage-provisioner [71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf] ...
	I0923 14:26:12.271570 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf"
	I0923 14:26:12.311818 1281940 logs.go:123] Gathering logs for storage-provisioner [283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb] ...
	I0923 14:26:12.311846 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb"
	I0923 14:26:12.349085 1281940 logs.go:123] Gathering logs for container status ...
	I0923 14:26:12.349117 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 14:26:12.392122 1281940 logs.go:123] Gathering logs for dmesg ...
	I0923 14:26:12.392153 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
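Each "Gathering logs for ..." step above shells into the node and tails that container's log through crictl with --tail 400. A minimal stand-alone equivalent (the container ID is copied from the coredns line above; assumes crictl is present in the node image, as it is here):

    minikube -p old-k8s-version-545656 ssh -- \
      sudo /usr/bin/crictl logs --tail 400 fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e
    # The "container status" step falls back to docker when crictl is missing:
    minikube -p old-k8s-version-545656 ssh -- 'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'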
	I0923 14:26:12.408491 1281940 out.go:358] Setting ErrFile to fd 2...
	I0923 14:26:12.408557 1281940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 14:26:12.408612 1281940 out.go:270] X Problems detected in kubelet:
	W0923 14:26:12.408625 1281940 out.go:270]   Sep 23 14:25:40 old-k8s-version-545656 kubelet[660]: E0923 14:25:40.635814     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:12.408631 1281940 out.go:270]   Sep 23 14:25:45 old-k8s-version-545656 kubelet[660]: E0923 14:25:45.635813     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:12.408637 1281940 out.go:270]   Sep 23 14:25:52 old-k8s-version-545656 kubelet[660]: E0923 14:25:52.635815     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:12.408665 1281940 out.go:270]   Sep 23 14:25:57 old-k8s-version-545656 kubelet[660]: E0923 14:25:57.635993     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:12.408672 1281940 out.go:270]   Sep 23 14:26:06 old-k8s-version-545656 kubelet[660]: E0923 14:26:06.636053     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	I0923 14:26:12.408680 1281940 out.go:358] Setting ErrFile to fd 2...
	I0923 14:26:12.408690 1281940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:26:13.686498 1291906 cli_runner.go:164] Run: docker start embed-certs-672015
	I0923 14:26:14.015982 1291906 cli_runner.go:164] Run: docker container inspect embed-certs-672015 --format={{.State.Status}}
	I0923 14:26:14.039964 1291906 kic.go:430] container "embed-certs-672015" state is running.
	I0923 14:26:14.040377 1291906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672015
	I0923 14:26:14.068522 1291906 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/embed-certs-672015/config.json ...
	I0923 14:26:14.069183 1291906 machine.go:93] provisionDockerMachine start ...
	I0923 14:26:14.069307 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:14.091402 1291906 main.go:141] libmachine: Using SSH client type: native
	I0923 14:26:14.091786 1291906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41800 <nil> <nil>}
	I0923 14:26:14.091806 1291906 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 14:26:14.092573 1291906 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
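The handshake EOF above is expected right after docker start: sshd inside the restarted container is not accepting connections yet, and libmachine retries until the handshake succeeds (about three seconds later in this run). A rough, hypothetical manual equivalent of that wait, using only the key path, user, and mapped port shown in the surrounding log lines:

    # Hypothetical polling loop; all values are taken from this log, none are invented.
    until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 -p 41800 \
        -i /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/embed-certs-672015/id_rsa \
        docker@127.0.0.1 true 2>/dev/null; do
      sleep 1
    done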
	I0923 14:26:17.227044 1291906 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-672015
	
	I0923 14:26:17.227072 1291906 ubuntu.go:169] provisioning hostname "embed-certs-672015"
	I0923 14:26:17.227139 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:17.245482 1291906 main.go:141] libmachine: Using SSH client type: native
	I0923 14:26:17.245731 1291906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41800 <nil> <nil>}
	I0923 14:26:17.245749 1291906 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672015 && echo "embed-certs-672015" | sudo tee /etc/hostname
	I0923 14:26:17.392391 1291906 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-672015
	
	I0923 14:26:17.392479 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:17.411845 1291906 main.go:141] libmachine: Using SSH client type: native
	I0923 14:26:17.412088 1291906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41800 <nil> <nil>}
	I0923 14:26:17.412107 1291906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672015/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 14:26:17.551447 1291906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
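The inline script above is minikube's idempotent /etc/hosts fixup: if no entry already ends in the new hostname, it rewrites an existing 127.0.1.1 line in place, otherwise it appends one. The same logic reflowed as a sketch (nothing new, just easier to read; NAME is this run's hostname):

    NAME=embed-certs-672015
    if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
      else
        echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
      fi
    fi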
	I0923 14:26:17.551472 1291906 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19690-1028234/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-1028234/.minikube}
	I0923 14:26:17.551518 1291906 ubuntu.go:177] setting up certificates
	I0923 14:26:17.551531 1291906 provision.go:84] configureAuth start
	I0923 14:26:17.551596 1291906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672015
	I0923 14:26:17.568032 1291906 provision.go:143] copyHostCerts
	I0923 14:26:17.568097 1291906 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-1028234/.minikube/cert.pem, removing ...
	I0923 14:26:17.568114 1291906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-1028234/.minikube/cert.pem
	I0923 14:26:17.568189 1291906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-1028234/.minikube/cert.pem (1123 bytes)
	I0923 14:26:17.568293 1291906 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-1028234/.minikube/key.pem, removing ...
	I0923 14:26:17.568304 1291906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-1028234/.minikube/key.pem
	I0923 14:26:17.568334 1291906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-1028234/.minikube/key.pem (1675 bytes)
	I0923 14:26:17.568393 1291906 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.pem, removing ...
	I0923 14:26:17.568403 1291906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.pem
	I0923 14:26:17.568437 1291906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.pem (1082 bytes)
	I0923 14:26:17.568488 1291906 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672015 san=[127.0.0.1 192.168.76.2 embed-certs-672015 localhost minikube]
	I0923 14:26:17.946704 1291906 provision.go:177] copyRemoteCerts
	I0923 14:26:17.946781 1291906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 14:26:17.946828 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:17.975069 1291906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41800 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/embed-certs-672015/id_rsa Username:docker}
	I0923 14:26:18.074152 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0923 14:26:18.103813 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 14:26:18.129765 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 14:26:18.157755 1291906 provision.go:87] duration metric: took 606.209257ms to configureAuth
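configureAuth above generated a server certificate with SANs [127.0.0.1 192.168.76.2 embed-certs-672015 localhost minikube] and copied server.pem, server-key.pem, and ca.pem into /etc/docker on the node. A hedged check that the SANs actually landed (assumes openssl exists in the node image):

    minikube -p embed-certs-672015 ssh -- \
      "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"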
	I0923 14:26:18.157781 1291906 ubuntu.go:193] setting minikube options for container-runtime
	I0923 14:26:18.157990 1291906 config.go:182] Loaded profile config "embed-certs-672015": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 14:26:18.157998 1291906 machine.go:96] duration metric: took 4.088794092s to provisionDockerMachine
	I0923 14:26:18.158006 1291906 start.go:293] postStartSetup for "embed-certs-672015" (driver="docker")
	I0923 14:26:18.158016 1291906 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 14:26:18.158071 1291906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 14:26:18.158123 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:18.187790 1291906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41800 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/embed-certs-672015/id_rsa Username:docker}
	I0923 14:26:18.285907 1291906 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 14:26:18.289336 1291906 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 14:26:18.289374 1291906 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 14:26:18.289385 1291906 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 14:26:18.289393 1291906 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 14:26:18.289409 1291906 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-1028234/.minikube/addons for local assets ...
	I0923 14:26:18.289468 1291906 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-1028234/.minikube/files for local assets ...
	I0923 14:26:18.289557 1291906 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-1028234/.minikube/files/etc/ssl/certs/10336162.pem -> 10336162.pem in /etc/ssl/certs
	I0923 14:26:18.289667 1291906 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 14:26:18.298350 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/files/etc/ssl/certs/10336162.pem --> /etc/ssl/certs/10336162.pem (1708 bytes)
	I0923 14:26:18.323589 1291906 start.go:296] duration metric: took 165.568185ms for postStartSetup
	I0923 14:26:18.323676 1291906 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 14:26:18.323720 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:18.340962 1291906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41800 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/embed-certs-672015/id_rsa Username:docker}
	I0923 14:26:18.432969 1291906 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 14:26:18.438003 1291906 fix.go:56] duration metric: took 4.777742302s for fixHost
	I0923 14:26:18.438029 1291906 start.go:83] releasing machines lock for "embed-certs-672015", held for 4.777799129s
	I0923 14:26:18.438101 1291906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672015
	I0923 14:26:18.456114 1291906 ssh_runner.go:195] Run: cat /version.json
	I0923 14:26:18.456179 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:18.456517 1291906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 14:26:18.456601 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:18.489318 1291906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41800 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/embed-certs-672015/id_rsa Username:docker}
	I0923 14:26:18.492550 1291906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41800 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/embed-certs-672015/id_rsa Username:docker}
	I0923 14:26:18.591376 1291906 ssh_runner.go:195] Run: systemctl --version
	I0923 14:26:18.719888 1291906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 14:26:18.724444 1291906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 14:26:18.743181 1291906 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 14:26:18.743263 1291906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 14:26:18.752476 1291906 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
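The two find commands above implement minikube's CNI cleanup: any *loopback.conf* gets a "name" field injected (if missing) and its cniVersion pinned to 1.0.0, while bridge/podman configs would be renamed to *.mk_disabled so they cannot shadow the kindnet CNI; in this run there were none to disable. To see what is left active:

    minikube -p embed-certs-672015 ssh -- \
      'ls /etc/cni/net.d; sudo cat /etc/cni/net.d/*loopback.conf* 2>/dev/null'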
	I0923 14:26:18.752521 1291906 start.go:495] detecting cgroup driver to use...
	I0923 14:26:18.752562 1291906 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 14:26:18.752628 1291906 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 14:26:18.767763 1291906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 14:26:18.788568 1291906 docker.go:217] disabling cri-docker service (if available) ...
	I0923 14:26:18.788639 1291906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 14:26:18.802842 1291906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 14:26:18.822635 1291906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 14:26:18.921581 1291906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 14:26:19.021728 1291906 docker.go:233] disabling docker service ...
	I0923 14:26:19.021892 1291906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 14:26:19.035713 1291906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 14:26:19.049695 1291906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 14:26:19.151786 1291906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 14:26:19.234688 1291906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 14:26:19.247650 1291906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 14:26:19.265309 1291906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 14:26:19.275589 1291906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 14:26:19.285659 1291906 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 14:26:19.285736 1291906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 14:26:19.295920 1291906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 14:26:19.306587 1291906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 14:26:19.316738 1291906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 14:26:19.327609 1291906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 14:26:19.337625 1291906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 14:26:19.349150 1291906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 14:26:19.360811 1291906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 14:26:19.371741 1291906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 14:26:19.382446 1291906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 14:26:19.391880 1291906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 14:26:19.491445 1291906 ssh_runner.go:195] Run: sudo systemctl restart containerd
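The block of sed edits above rewrites /etc/containerd/config.toml before the restart: cgroupfs as the cgroup driver (SystemdCgroup = false), sandbox_image pinned to registry.k8s.io/pause:3.10, the runc v2 runtime forced, conf_dir pointed at /etc/cni/net.d, and enable_unprivileged_ports re-added under the CRI plugin. A quick verification sketch:

    minikube -p embed-certs-672015 ssh -- \
      "grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml"
    minikube -p embed-certs-672015 ssh -- sudo systemctl is-active containerd   # expected: active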
	I0923 14:26:19.644785 1291906 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0923 14:26:19.644891 1291906 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0923 14:26:19.655843 1291906 start.go:563] Will wait 60s for crictl version
	I0923 14:26:19.655916 1291906 ssh_runner.go:195] Run: which crictl
	I0923 14:26:19.660053 1291906 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 14:26:19.705255 1291906 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0923 14:26:19.705348 1291906 ssh_runner.go:195] Run: containerd --version
	I0923 14:26:19.730445 1291906 ssh_runner.go:195] Run: containerd --version
	I0923 14:26:19.757212 1291906 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0923 14:26:19.760047 1291906 cli_runner.go:164] Run: docker network inspect embed-certs-672015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 14:26:19.787474 1291906 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0923 14:26:19.791701 1291906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
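The hosts rewrite above deliberately avoids sed -i, likely because /etc/hosts is bind-mounted into the container and cannot be replaced by rename: it filters out any stale host.minikube.internal line, appends the gateway mapping, writes a temp file, and copies it back over /etc/hosts. Confirming the result:

    minikube -p embed-certs-672015 ssh -- grep host.minikube.internal /etc/hosts
    # expected: 192.168.76.1	host.minikube.internal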
	I0923 14:26:19.805131 1291906 kubeadm.go:883] updating cluster {Name:embed-certs-672015 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-672015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 14:26:19.805259 1291906 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 14:26:19.805334 1291906 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 14:26:19.846075 1291906 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 14:26:19.846103 1291906 containerd.go:534] Images already preloaded, skipping extraction
	I0923 14:26:19.846168 1291906 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 14:26:19.884455 1291906 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 14:26:19.884477 1291906 cache_images.go:84] Images are preloaded, skipping loading
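Both "crictl images" passes above confirm the v1.31.1 preload is already on the node, so minikube skips extracting the preload tarball and skips explicit image loading. The same inventory by hand:

    minikube -p embed-certs-672015 ssh -- sudo crictl images
    # or, mirroring the log's exact call:
    minikube -p embed-certs-672015 ssh -- sudo crictl images --output json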
	I0923 14:26:19.884485 1291906 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.1 containerd true true} ...
	I0923 14:26:19.884605 1291906 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-672015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-672015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 14:26:19.884685 1291906 ssh_runner.go:195] Run: sudo crictl info
	I0923 14:26:19.923107 1291906 cni.go:84] Creating CNI manager for ""
	I0923 14:26:19.923178 1291906 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 14:26:19.923205 1291906 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 14:26:19.923264 1291906 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672015 NodeName:embed-certs-672015 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 14:26:19.923479 1291906 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-672015"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 14:26:19.923575 1291906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 14:26:19.933565 1291906 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 14:26:19.933653 1291906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 14:26:19.943602 1291906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0923 14:26:19.962916 1291906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 14:26:19.989773 1291906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
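
The kubeadm config printed above is rendered from the option set computed at kubeadm.go:181 and then written out as /var/tmp/minikube/kubeadm.yaml.new. A stripped-down sketch of that render step using text/template (the template and struct here are illustrative stand-ins, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the values computed at kubeadm.go:181;
// the field names are illustrative.
type kubeadmValues struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const cfgTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(cfgTmpl))
	// Values taken from the log above.
	v := kubeadmValues{
		AdvertiseAddress: "192.168.76.2",
		NodeName:         "embed-certs-672015",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.31.1",
	}
	if err := t.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}

Running it reproduces the InitConfiguration/ClusterConfiguration skeleton shown in the log, minus the extraArgs and the kubelet and kube-proxy documents.
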
	I0923 14:26:20.013900 1291906 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0923 14:26:20.018897 1291906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 14:26:20.032825 1291906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 14:26:20.153459 1291906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 14:26:20.171843 1291906 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/embed-certs-672015 for IP: 192.168.76.2
	I0923 14:26:20.171863 1291906 certs.go:194] generating shared ca certs ...
	I0923 14:26:20.171879 1291906 certs.go:226] acquiring lock for ca certs: {Name:mk03d32b578b2438d161be017440f804f69b681b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 14:26:20.172029 1291906 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.key
	I0923 14:26:20.172086 1291906 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.key
	I0923 14:26:20.172093 1291906 certs.go:256] generating profile certs ...
	I0923 14:26:20.172178 1291906 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/embed-certs-672015/client.key
	I0923 14:26:20.172238 1291906 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/embed-certs-672015/apiserver.key.cf1a5ae6
	I0923 14:26:20.172282 1291906 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/embed-certs-672015/proxy-client.key
	I0923 14:26:20.172397 1291906 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/1033616.pem (1338 bytes)
	W0923 14:26:20.172426 1291906 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/1033616_empty.pem, impossibly tiny 0 bytes
	I0923 14:26:20.172433 1291906 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 14:26:20.172457 1291906 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/ca.pem (1082 bytes)
	I0923 14:26:20.172479 1291906 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/cert.pem (1123 bytes)
	I0923 14:26:20.172501 1291906 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/key.pem (1675 bytes)
	I0923 14:26:20.172543 1291906 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1028234/.minikube/files/etc/ssl/certs/10336162.pem (1708 bytes)
	I0923 14:26:20.173249 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 14:26:20.213616 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 14:26:20.246437 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 14:26:20.280819 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 14:26:20.320436 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/embed-certs-672015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0923 14:26:20.361881 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/embed-certs-672015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 14:26:20.390980 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/embed-certs-672015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 14:26:20.422804 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/embed-certs-672015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 14:26:20.458159 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 14:26:20.491943 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/certs/1033616.pem --> /usr/share/ca-certificates/1033616.pem (1338 bytes)
	I0923 14:26:20.521627 1291906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-1028234/.minikube/files/etc/ssl/certs/10336162.pem --> /usr/share/ca-certificates/10336162.pem (1708 bytes)
	I0923 14:26:20.552902 1291906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 14:26:20.574895 1291906 ssh_runner.go:195] Run: openssl version
	I0923 14:26:20.581336 1291906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 14:26:20.593174 1291906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 14:26:20.597648 1291906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 13:23 /usr/share/ca-certificates/minikubeCA.pem
	I0923 14:26:20.597722 1291906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 14:26:20.605237 1291906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 14:26:20.614822 1291906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1033616.pem && ln -fs /usr/share/ca-certificates/1033616.pem /etc/ssl/certs/1033616.pem"
	I0923 14:26:20.624834 1291906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1033616.pem
	I0923 14:26:20.628806 1291906 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 13:34 /usr/share/ca-certificates/1033616.pem
	I0923 14:26:20.628876 1291906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1033616.pem
	I0923 14:26:20.637164 1291906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1033616.pem /etc/ssl/certs/51391683.0"
	I0923 14:26:20.648243 1291906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10336162.pem && ln -fs /usr/share/ca-certificates/10336162.pem /etc/ssl/certs/10336162.pem"
	I0923 14:26:20.660642 1291906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10336162.pem
	I0923 14:26:20.664500 1291906 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 13:34 /usr/share/ca-certificates/10336162.pem
	I0923 14:26:20.664597 1291906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10336162.pem
	I0923 14:26:20.672182 1291906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10336162.pem /etc/ssl/certs/3ec20f2e.0"
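
Each of the three cert installs above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash with "openssl x509 -hash -noout", and symlink it into /etc/ssl/certs as <hash>.0 so OpenSSL's hashed-directory lookup can find it (b5213941.0 is exactly that hash for minikubeCA.pem). A short Go sketch of the hash-and-link step (assumes openssl on PATH and write access to the target directory):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink computes the OpenSSL subject hash of certPath and creates
// the /etc/ssl/certs/<hash>.0 style symlink inside dir.
func hashLink(certPath, dir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(dir, hash+".0")
	// Replace any stale link, like the "test -L || ln -fs" in the log.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("created", link)
}
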
	I0923 14:26:20.682400 1291906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 14:26:20.686360 1291906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 14:26:20.694668 1291906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 14:26:20.702249 1291906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 14:26:20.709471 1291906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 14:26:20.717341 1291906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 14:26:20.725159 1291906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
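
The six openssl runs above all use "-checkend 86400", which asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, 1 means it will have expired within the window, and minikube keys off that to decide whether the existing control-plane certs can be reused. A small Go sketch that makes the exit-code contract explicit:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// validFor reports whether the certificate at path is still valid
// `seconds` from now, using openssl's -checkend exit code
// (0 = still valid, 1 = will have expired).
func validFor(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", fmt.Sprint(seconds))
	err := cmd.Run()
	if err == nil {
		return true, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return false, nil // cert expires within the window
	}
	return false, err // openssl itself failed (bad path, etc.)
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	fmt.Println(ok, err)
}
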
	I0923 14:26:20.732684 1291906 kubeadm.go:392] StartCluster: {Name:embed-certs-672015 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-672015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 14:26:20.732863 1291906 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0923 14:26:20.732959 1291906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 14:26:20.801306 1291906 cri.go:89] found id: "40e2ddfc935f42e9476e79c3d1c36dbc591daa148888bf3ab39c3a9eb7ab7a7c"
	I0923 14:26:20.801345 1291906 cri.go:89] found id: "bb86d25eccb2984fabcb9cfba969c387b5943172ab7af0a2b3a51da042dd2869"
	I0923 14:26:20.801351 1291906 cri.go:89] found id: "00f17bb7caeed17a2ed9c0cd5d491ae1e457a5662740978d1deeb40f90c496a7"
	I0923 14:26:20.801355 1291906 cri.go:89] found id: "d5b2540b4a196cb44d21945799104043a184e6e2be7f13351cda88308ea2b82c"
	I0923 14:26:20.801359 1291906 cri.go:89] found id: "9cf32130e2eb416ac5c8aa9cb713d6ff41bccdd8bf698cc138992c20866531d8"
	I0923 14:26:20.801363 1291906 cri.go:89] found id: "84fdc46cea5d43b32ed7b84193d9bef458f849bda7298e3f21909ea1b5e22cd4"
	I0923 14:26:20.801393 1291906 cri.go:89] found id: "3193bfb6c7a5385aafcd74dadefc6d0c6d3daae0e3ccffb20675cb078615e26a"
	I0923 14:26:20.801400 1291906 cri.go:89] found id: "537a47dd60b6bc51c8e2f180b76fc015a9a5533577e7127fe213f991a383c241"
	I0923 14:26:20.801404 1291906 cri.go:89] found id: ""
	I0923 14:26:20.801517 1291906 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0923 14:26:20.828308 1291906 cri.go:116] JSON = null
	W0923 14:26:20.828412 1291906 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
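
The warning above comes from cross-checking two views of the runtime: "crictl ps -a --quiet" returned eight kube-system container IDs, while "runc list -f json" printed a literal null, so there is nothing to unpause and the restart path simply continues. A Go sketch of that cross-check (the id/status JSON field names follow runc's state output and should be treated as an assumption to verify against your runc version):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runc `list -f json` emits an array of container state objects.
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(psOut)) // one container ID per line

	listOut, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		panic(err)
	}
	var states []runcState // stays nil when runc prints "null", as in the log
	if err := json.Unmarshal(listOut, &states); err != nil {
		panic(err)
	}
	if len(states) != len(ids) {
		fmt.Printf("list returned %d containers, but ps returned %d\n",
			len(states), len(ids))
	}
}
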
	I0923 14:26:20.828505 1291906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 14:26:20.842646 1291906 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 14:26:20.842722 1291906 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 14:26:20.842809 1291906 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 14:26:20.856472 1291906 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 14:26:20.857187 1291906 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-672015" does not appear in /home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 14:26:20.857513 1291906 kubeconfig.go:62] /home/jenkins/minikube-integration/19690-1028234/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-672015" cluster setting kubeconfig missing "embed-certs-672015" context setting]
	I0923 14:26:20.858035 1291906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/kubeconfig: {Name:mkd806df25aca780e43239d5b6c8b09e764ab897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 14:26:20.859677 1291906 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 14:26:20.878564 1291906 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0923 14:26:20.878644 1291906 kubeadm.go:597] duration metric: took 35.90289ms to restartPrimaryControlPlane
	I0923 14:26:20.878669 1291906 kubeadm.go:394] duration metric: took 145.996003ms to StartCluster
	I0923 14:26:20.878713 1291906 settings.go:142] acquiring lock: {Name:mk31b92312dde44fbd825c77a82e5dececb66fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 14:26:20.878798 1291906 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 14:26:20.880125 1291906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1028234/kubeconfig: {Name:mkd806df25aca780e43239d5b6c8b09e764ab897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 14:26:20.880417 1291906 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 14:26:20.880922 1291906 config.go:182] Loaded profile config "embed-certs-672015": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 14:26:20.880891 1291906 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 14:26:20.881001 1291906 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-672015"
	I0923 14:26:20.881034 1291906 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-672015"
	W0923 14:26:20.881041 1291906 addons.go:243] addon storage-provisioner should already be in state true
	I0923 14:26:20.881072 1291906 addons.go:69] Setting default-storageclass=true in profile "embed-certs-672015"
	I0923 14:26:20.881100 1291906 host.go:66] Checking if "embed-certs-672015" exists ...
	I0923 14:26:20.881107 1291906 addons.go:69] Setting dashboard=true in profile "embed-certs-672015"
	I0923 14:26:20.881172 1291906 addons.go:234] Setting addon dashboard=true in "embed-certs-672015"
	W0923 14:26:20.881192 1291906 addons.go:243] addon dashboard should already be in state true
	I0923 14:26:20.881255 1291906 host.go:66] Checking if "embed-certs-672015" exists ...
	I0923 14:26:20.881805 1291906 cli_runner.go:164] Run: docker container inspect embed-certs-672015 --format={{.State.Status}}
	I0923 14:26:20.881983 1291906 cli_runner.go:164] Run: docker container inspect embed-certs-672015 --format={{.State.Status}}
	I0923 14:26:20.882549 1291906 addons.go:69] Setting metrics-server=true in profile "embed-certs-672015"
	I0923 14:26:20.882592 1291906 addons.go:234] Setting addon metrics-server=true in "embed-certs-672015"
	W0923 14:26:20.882615 1291906 addons.go:243] addon metrics-server should already be in state true
	I0923 14:26:20.882672 1291906 host.go:66] Checking if "embed-certs-672015" exists ...
	I0923 14:26:20.883416 1291906 cli_runner.go:164] Run: docker container inspect embed-certs-672015 --format={{.State.Status}}
	I0923 14:26:20.881100 1291906 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-672015"
	I0923 14:26:20.892118 1291906 cli_runner.go:164] Run: docker container inspect embed-certs-672015 --format={{.State.Status}}
	I0923 14:26:20.893492 1291906 out.go:177] * Verifying Kubernetes components...
	I0923 14:26:20.901451 1291906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 14:26:20.957504 1291906 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 14:26:20.957996 1291906 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0923 14:26:20.961653 1291906 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 14:26:20.961684 1291906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 14:26:20.961764 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:20.966181 1291906 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0923 14:26:20.970113 1291906 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0923 14:26:20.970138 1291906 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0923 14:26:20.970210 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:20.983859 1291906 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0923 14:26:20.989122 1291906 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 14:26:20.989161 1291906 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 14:26:20.989231 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:20.998880 1291906 addons.go:234] Setting addon default-storageclass=true in "embed-certs-672015"
	W0923 14:26:20.998913 1291906 addons.go:243] addon default-storageclass should already be in state true
	I0923 14:26:20.998945 1291906 host.go:66] Checking if "embed-certs-672015" exists ...
	I0923 14:26:21.003132 1291906 cli_runner.go:164] Run: docker container inspect embed-certs-672015 --format={{.State.Status}}
	I0923 14:26:21.043179 1291906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41800 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/embed-certs-672015/id_rsa Username:docker}
	I0923 14:26:21.050667 1291906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41800 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/embed-certs-672015/id_rsa Username:docker}
	I0923 14:26:21.063532 1291906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41800 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/embed-certs-672015/id_rsa Username:docker}
	I0923 14:26:21.069175 1291906 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 14:26:21.069196 1291906 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 14:26:21.069259 1291906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672015
	I0923 14:26:21.097346 1291906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41800 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/embed-certs-672015/id_rsa Username:docker}
	I0923 14:26:21.165405 1291906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 14:26:21.224236 1291906 node_ready.go:35] waiting up to 6m0s for node "embed-certs-672015" to be "Ready" ...
	I0923 14:26:21.318173 1291906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 14:26:21.318243 1291906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0923 14:26:21.372366 1291906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 14:26:21.375568 1291906 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0923 14:26:21.375591 1291906 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0923 14:26:21.403884 1291906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 14:26:21.403911 1291906 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 14:26:21.422184 1291906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 14:26:21.460240 1291906 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0923 14:26:21.460268 1291906 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0923 14:26:21.629224 1291906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 14:26:21.629253 1291906 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 14:26:21.684799 1291906 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0923 14:26:21.684833 1291906 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0923 14:26:21.874202 1291906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 14:26:21.992498 1291906 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0923 14:26:21.992525 1291906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0923 14:26:22.239497 1291906 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0923 14:26:22.239524 1291906 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0923 14:26:22.366282 1291906 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0923 14:26:22.366309 1291906 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0923 14:26:22.412398 1291906 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0923 14:26:22.412425 1291906 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0923 14:26:22.486584 1291906 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0923 14:26:22.486611 1291906 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0923 14:26:22.604742 1291906 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 14:26:22.604836 1291906 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0923 14:26:22.657161 1291906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 14:26:22.409820 1281940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 14:26:22.425513 1281940 api_server.go:72] duration metric: took 5m49.762404341s to wait for apiserver process to appear ...
	I0923 14:26:22.425544 1281940 api_server.go:88] waiting for apiserver healthz status ...
	I0923 14:26:22.425580 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0923 14:26:22.425642 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 14:26:22.496655 1281940 cri.go:89] found id: "9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe"
	I0923 14:26:22.496676 1281940 cri.go:89] found id: "be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf"
	I0923 14:26:22.496681 1281940 cri.go:89] found id: ""
	I0923 14:26:22.496688 1281940 logs.go:276] 2 containers: [9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf]
	I0923 14:26:22.496745 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.501052 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.505379 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0923 14:26:22.505450 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 14:26:22.582433 1281940 cri.go:89] found id: "b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec"
	I0923 14:26:22.582454 1281940 cri.go:89] found id: "b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef"
	I0923 14:26:22.582459 1281940 cri.go:89] found id: ""
	I0923 14:26:22.582466 1281940 logs.go:276] 2 containers: [b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef]
	I0923 14:26:22.582600 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.592957 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.599817 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0923 14:26:22.599889 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 14:26:22.702122 1281940 cri.go:89] found id: "fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e"
	I0923 14:26:22.702144 1281940 cri.go:89] found id: "a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2"
	I0923 14:26:22.702149 1281940 cri.go:89] found id: ""
	I0923 14:26:22.702156 1281940 logs.go:276] 2 containers: [fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2]
	I0923 14:26:22.702219 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.706378 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.713852 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0923 14:26:22.713925 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 14:26:22.804925 1281940 cri.go:89] found id: "850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9"
	I0923 14:26:22.804947 1281940 cri.go:89] found id: "47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76"
	I0923 14:26:22.804952 1281940 cri.go:89] found id: ""
	I0923 14:26:22.804960 1281940 logs.go:276] 2 containers: [850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9 47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76]
	I0923 14:26:22.805021 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.814066 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.820291 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0923 14:26:22.820428 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 14:26:22.902616 1281940 cri.go:89] found id: "c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2"
	I0923 14:26:22.902695 1281940 cri.go:89] found id: "887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025"
	I0923 14:26:22.902714 1281940 cri.go:89] found id: ""
	I0923 14:26:22.902738 1281940 logs.go:276] 2 containers: [c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2 887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025]
	I0923 14:26:22.902845 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.906700 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:22.910619 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 14:26:22.910761 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 14:26:22.996237 1281940 cri.go:89] found id: "3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f"
	I0923 14:26:22.996299 1281940 cri.go:89] found id: "84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629"
	I0923 14:26:22.996329 1281940 cri.go:89] found id: ""
	I0923 14:26:22.996350 1281940 logs.go:276] 2 containers: [3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f 84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629]
	I0923 14:26:22.996436 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.000507 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.004974 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0923 14:26:23.005070 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 14:26:23.118716 1281940 cri.go:89] found id: "88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b"
	I0923 14:26:23.118792 1281940 cri.go:89] found id: "1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff"
	I0923 14:26:23.118811 1281940 cri.go:89] found id: ""
	I0923 14:26:23.118832 1281940 logs.go:276] 2 containers: [88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b 1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff]
	I0923 14:26:23.118921 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.136584 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.144894 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0923 14:26:23.144968 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0923 14:26:23.282871 1281940 cri.go:89] found id: "71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf"
	I0923 14:26:23.282891 1281940 cri.go:89] found id: "283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb"
	I0923 14:26:23.282896 1281940 cri.go:89] found id: ""
	I0923 14:26:23.282904 1281940 logs.go:276] 2 containers: [71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf 283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb]
	I0923 14:26:23.282971 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.288843 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.297143 1281940 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0923 14:26:23.297234 1281940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0923 14:26:23.373768 1281940 cri.go:89] found id: "d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542"
	I0923 14:26:23.373842 1281940 cri.go:89] found id: ""
	I0923 14:26:23.373864 1281940 logs.go:276] 1 containers: [d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542]
	I0923 14:26:23.373956 1281940 ssh_runner.go:195] Run: which crictl
	I0923 14:26:23.383300 1281940 logs.go:123] Gathering logs for etcd [b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec] ...
	I0923 14:26:23.383385 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec"
	I0923 14:26:23.473945 1281940 logs.go:123] Gathering logs for etcd [b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef] ...
	I0923 14:26:23.473974 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef"
	I0923 14:26:23.573792 1281940 logs.go:123] Gathering logs for kube-controller-manager [84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629] ...
	I0923 14:26:23.573820 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629"
	I0923 14:26:23.643899 1281940 logs.go:123] Gathering logs for kube-apiserver [9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe] ...
	I0923 14:26:23.643934 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe"
	I0923 14:26:23.784465 1281940 logs.go:123] Gathering logs for coredns [a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2] ...
	I0923 14:26:23.784500 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2"
	I0923 14:26:23.850037 1281940 logs.go:123] Gathering logs for kube-proxy [c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2] ...
	I0923 14:26:23.850067 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2"
	I0923 14:26:23.930274 1281940 logs.go:123] Gathering logs for kube-proxy [887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025] ...
	I0923 14:26:23.930304 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025"
	I0923 14:26:24.017381 1281940 logs.go:123] Gathering logs for kindnet [88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b] ...
	I0923 14:26:24.017416 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b"
	I0923 14:26:24.112084 1281940 logs.go:123] Gathering logs for kindnet [1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff] ...
	I0923 14:26:24.112114 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff"
	I0923 14:26:24.209793 1281940 logs.go:123] Gathering logs for storage-provisioner [71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf] ...
	I0923 14:26:24.209823 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf"
	I0923 14:26:24.277912 1281940 logs.go:123] Gathering logs for kubernetes-dashboard [d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542] ...
	I0923 14:26:24.277945 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542"
	I0923 14:26:24.352566 1281940 logs.go:123] Gathering logs for coredns [fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e] ...
	I0923 14:26:24.352595 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e"
	I0923 14:26:24.434422 1281940 logs.go:123] Gathering logs for containerd ...
	I0923 14:26:24.434455 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0923 14:26:24.535442 1281940 logs.go:123] Gathering logs for dmesg ...
	I0923 14:26:24.535516 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 14:26:24.564761 1281940 logs.go:123] Gathering logs for describe nodes ...
	I0923 14:26:24.564796 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 14:26:24.813627 1281940 logs.go:123] Gathering logs for kube-apiserver [be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf] ...
	I0923 14:26:24.813665 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf"
	I0923 14:26:24.922628 1281940 logs.go:123] Gathering logs for kube-scheduler [47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76] ...
	I0923 14:26:24.922664 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76"
	I0923 14:26:24.988073 1281940 logs.go:123] Gathering logs for kube-controller-manager [3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f] ...
	I0923 14:26:24.988154 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f"
	I0923 14:26:25.083802 1281940 logs.go:123] Gathering logs for storage-provisioner [283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb] ...
	I0923 14:26:25.083887 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb"
	I0923 14:26:27.148471 1291906 node_ready.go:49] node "embed-certs-672015" has status "Ready":"True"
	I0923 14:26:27.148494 1291906 node_ready.go:38] duration metric: took 5.924158903s for node "embed-certs-672015" to be "Ready" ...
	I0923 14:26:27.148505 1291906 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 14:26:27.195825 1291906 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jkwwl" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.261067 1291906 pod_ready.go:93] pod "coredns-7c65d6cfc9-jkwwl" in "kube-system" namespace has status "Ready":"True"
	I0923 14:26:27.261161 1291906 pod_ready.go:82] duration metric: took 65.260481ms for pod "coredns-7c65d6cfc9-jkwwl" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.261192 1291906 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-672015" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.284371 1291906 pod_ready.go:93] pod "etcd-embed-certs-672015" in "kube-system" namespace has status "Ready":"True"
	I0923 14:26:27.284454 1291906 pod_ready.go:82] duration metric: took 23.226864ms for pod "etcd-embed-certs-672015" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.284487 1291906 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-672015" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.298327 1291906 pod_ready.go:93] pod "kube-apiserver-embed-certs-672015" in "kube-system" namespace has status "Ready":"True"
	I0923 14:26:27.298399 1291906 pod_ready.go:82] duration metric: took 13.876349ms for pod "kube-apiserver-embed-certs-672015" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.298428 1291906 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-672015" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.305019 1291906 pod_ready.go:93] pod "kube-controller-manager-embed-certs-672015" in "kube-system" namespace has status "Ready":"True"
	I0923 14:26:27.305087 1291906 pod_ready.go:82] duration metric: took 6.636652ms for pod "kube-controller-manager-embed-certs-672015" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.305114 1291906 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-twzdj" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.357758 1291906 pod_ready.go:93] pod "kube-proxy-twzdj" in "kube-system" namespace has status "Ready":"True"
	I0923 14:26:27.357839 1291906 pod_ready.go:82] duration metric: took 52.703543ms for pod "kube-proxy-twzdj" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.357868 1291906 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-672015" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.754300 1291906 pod_ready.go:93] pod "kube-scheduler-embed-certs-672015" in "kube-system" namespace has status "Ready":"True"
	I0923 14:26:27.754373 1291906 pod_ready.go:82] duration metric: took 396.4814ms for pod "kube-scheduler-embed-certs-672015" in "kube-system" namespace to be "Ready" ...
	I0923 14:26:27.754400 1291906 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-t5stp" in "kube-system" namespace to be "Ready" ...
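
The node_ready and pod_ready waiters above poll the API server until the corresponding Ready condition reports True, with a 6m0s ceiling per object. A client-go sketch of the same poll loop (a sketch under standard client-go assumptions, not minikube's actual pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True, the
// same contract the pod_ready waiters above are checking.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitPodReady(cs, "kube-system", "etcd-embed-certs-672015", 6*time.Minute)
	fmt.Println("ready:", err == nil)
}
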
	I0923 14:26:25.150389 1281940 logs.go:123] Gathering logs for kubelet ...
	I0923 14:26:25.150461 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 14:26:25.221426 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.434990     660 reflector.go:138] object-"kube-system"/"coredns-token-fq9jh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-fq9jh" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.221648 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435179     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.221864 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435351     660 reflector.go:138] object-"default"/"default-token-mdzzq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-mdzzq" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.222082 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435405     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-xsdtp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-xsdtp" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.222289 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435468     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.222511 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435532     660 reflector.go:138] object-"kube-system"/"metrics-server-token-2jjpk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-2jjpk" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.222722 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435596     660 reflector.go:138] object-"kube-system"/"kindnet-token-9ghmh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-9ghmh" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.222950 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:49 old-k8s-version-545656 kubelet[660]: E0923 14:20:49.435637     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-2r2wr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-2r2wr" is forbidden: User "system:node:old-k8s-version-545656" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-545656' and this object
	W0923 14:26:25.233382 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:51 old-k8s-version-545656 kubelet[660]: E0923 14:20:51.101674     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:25.233784 1281940 logs.go:138] Found kubelet problem: Sep 23 14:20:51 old-k8s-version-545656 kubelet[660]: E0923 14:20:51.779153     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.242709 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:05 old-k8s-version-545656 kubelet[660]: E0923 14:21:05.645094     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:25.244532 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:17 old-k8s-version-545656 kubelet[660]: E0923 14:21:17.635859     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.244866 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:17 old-k8s-version-545656 kubelet[660]: E0923 14:21:17.889963     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.245322 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:18 old-k8s-version-545656 kubelet[660]: E0923 14:21:18.894190     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.246094 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:21 old-k8s-version-545656 kubelet[660]: E0923 14:21:21.908450     660 pod_workers.go:191] Error syncing pod fd80c4ad-2827-4f73-9606-ebd8da196062 ("storage-provisioner_kube-system(fd80c4ad-2827-4f73-9606-ebd8da196062)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fd80c4ad-2827-4f73-9606-ebd8da196062)"
	W0923 14:26:25.246429 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:22 old-k8s-version-545656 kubelet[660]: E0923 14:21:22.447416     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.253319 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:29 old-k8s-version-545656 kubelet[660]: E0923 14:21:29.646284     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:25.254390 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:37 old-k8s-version-545656 kubelet[660]: E0923 14:21:37.960852     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.254719 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:42 old-k8s-version-545656 kubelet[660]: E0923 14:21:42.447507     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.254906 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:43 old-k8s-version-545656 kubelet[660]: E0923 14:21:43.635783     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.255234 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:54 old-k8s-version-545656 kubelet[660]: E0923 14:21:54.639254     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.255450 1281940 logs.go:138] Found kubelet problem: Sep 23 14:21:57 old-k8s-version-545656 kubelet[660]: E0923 14:21:57.635775     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.256042 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:07 old-k8s-version-545656 kubelet[660]: E0923 14:22:07.053536     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.262867 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:10 old-k8s-version-545656 kubelet[660]: E0923 14:22:10.659072     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:25.263212 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:12 old-k8s-version-545656 kubelet[660]: E0923 14:22:12.447387     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.263413 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:25 old-k8s-version-545656 kubelet[660]: E0923 14:22:25.636025     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.263740 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:26 old-k8s-version-545656 kubelet[660]: E0923 14:22:26.635306     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.263924 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:37 old-k8s-version-545656 kubelet[660]: E0923 14:22:37.635772     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.264250 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:41 old-k8s-version-545656 kubelet[660]: E0923 14:22:41.635442     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.264437 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:50 old-k8s-version-545656 kubelet[660]: E0923 14:22:50.636229     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.265025 1281940 logs.go:138] Found kubelet problem: Sep 23 14:22:53 old-k8s-version-545656 kubelet[660]: E0923 14:22:53.191243     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.265354 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:02 old-k8s-version-545656 kubelet[660]: E0923 14:23:02.447537     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.265539 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:04 old-k8s-version-545656 kubelet[660]: E0923 14:23:04.635823     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.265873 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:14 old-k8s-version-545656 kubelet[660]: E0923 14:23:14.636224     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.266058 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:19 old-k8s-version-545656 kubelet[660]: E0923 14:23:19.635919     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.266386 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:25 old-k8s-version-545656 kubelet[660]: E0923 14:23:25.635389     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.273480 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:32 old-k8s-version-545656 kubelet[660]: E0923 14:23:32.645182     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 14:26:25.273830 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:39 old-k8s-version-545656 kubelet[660]: E0923 14:23:39.635944     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.274017 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:47 old-k8s-version-545656 kubelet[660]: E0923 14:23:47.635761     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.274349 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:54 old-k8s-version-545656 kubelet[660]: E0923 14:23:54.635772     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.274538 1281940 logs.go:138] Found kubelet problem: Sep 23 14:23:59 old-k8s-version-545656 kubelet[660]: E0923 14:23:59.635871     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.274863 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:08 old-k8s-version-545656 kubelet[660]: E0923 14:24:08.636021     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.275047 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:14 old-k8s-version-545656 kubelet[660]: E0923 14:24:14.635785     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.275640 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:24 old-k8s-version-545656 kubelet[660]: E0923 14:24:24.457360     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.275827 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:26 old-k8s-version-545656 kubelet[660]: E0923 14:24:26.644873     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.276154 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:32 old-k8s-version-545656 kubelet[660]: E0923 14:24:32.452004     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.276341 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:41 old-k8s-version-545656 kubelet[660]: E0923 14:24:41.635805     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.281642 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:44 old-k8s-version-545656 kubelet[660]: E0923 14:24:44.635490     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.281846 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:54 old-k8s-version-545656 kubelet[660]: E0923 14:24:54.638624     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.282175 1281940 logs.go:138] Found kubelet problem: Sep 23 14:24:56 old-k8s-version-545656 kubelet[660]: E0923 14:24:56.639209     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.282363 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:07 old-k8s-version-545656 kubelet[660]: E0923 14:25:07.635738     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.282716 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:11 old-k8s-version-545656 kubelet[660]: E0923 14:25:11.635425     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.282901 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:18 old-k8s-version-545656 kubelet[660]: E0923 14:25:18.635950     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.283227 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:25 old-k8s-version-545656 kubelet[660]: E0923 14:25:25.635412     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.283421 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:33 old-k8s-version-545656 kubelet[660]: E0923 14:25:33.635781     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.283748 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:40 old-k8s-version-545656 kubelet[660]: E0923 14:25:40.635814     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.283932 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:45 old-k8s-version-545656 kubelet[660]: E0923 14:25:45.635813     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.284260 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:52 old-k8s-version-545656 kubelet[660]: E0923 14:25:52.635815     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.284444 1281940 logs.go:138] Found kubelet problem: Sep 23 14:25:57 old-k8s-version-545656 kubelet[660]: E0923 14:25:57.635993     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.284772 1281940 logs.go:138] Found kubelet problem: Sep 23 14:26:06 old-k8s-version-545656 kubelet[660]: E0923 14:26:06.636053     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.284963 1281940 logs.go:138] Found kubelet problem: Sep 23 14:26:12 old-k8s-version-545656 kubelet[660]: E0923 14:26:12.639509     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.285288 1281940 logs.go:138] Found kubelet problem: Sep 23 14:26:18 old-k8s-version-545656 kubelet[660]: E0923 14:26:18.635458     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	I0923 14:26:25.285298 1281940 logs.go:123] Gathering logs for container status ...
	I0923 14:26:25.285312 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 14:26:25.366180 1281940 logs.go:123] Gathering logs for kube-scheduler [850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9] ...
	I0923 14:26:25.366216 1281940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9"
	I0923 14:26:25.443835 1281940 out.go:358] Setting ErrFile to fd 2...
	I0923 14:26:25.443867 1281940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 14:26:25.443913 1281940 out.go:270] X Problems detected in kubelet:
	W0923 14:26:25.443934 1281940 out.go:270]   Sep 23 14:25:52 old-k8s-version-545656 kubelet[660]: E0923 14:25:52.635815     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.443949 1281940 out.go:270]   Sep 23 14:25:57 old-k8s-version-545656 kubelet[660]: E0923 14:25:57.635993     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.443966 1281940 out.go:270]   Sep 23 14:26:06 old-k8s-version-545656 kubelet[660]: E0923 14:26:06.636053     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	W0923 14:26:25.443973 1281940 out.go:270]   Sep 23 14:26:12 old-k8s-version-545656 kubelet[660]: E0923 14:26:12.639509     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 14:26:25.443985 1281940 out.go:270]   Sep 23 14:26:18 old-k8s-version-545656 kubelet[660]: E0923 14:26:18.635458     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	I0923 14:26:25.443992 1281940 out.go:358] Setting ErrFile to fd 2...
	I0923 14:26:25.443998 1281940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:26:29.760595 1291906 pod_ready.go:103] pod "metrics-server-6867b74b74-t5stp" in "kube-system" namespace has status "Ready":"False"
	I0923 14:26:30.510654 1291906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.138187388s)
	I0923 14:26:30.510708 1291906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.088503941s)
	I0923 14:26:30.543004 1291906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.668758884s)
	I0923 14:26:30.543034 1291906 addons.go:475] Verifying addon metrics-server=true in "embed-certs-672015"
	I0923 14:26:30.633968 1291906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.976711171s)
	I0923 14:26:30.636187 1291906 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-672015 addons enable metrics-server
	
	I0923 14:26:30.638844 1291906 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0923 14:26:30.641427 1291906 addons.go:510] duration metric: took 9.760535494s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0923 14:26:31.761341 1291906 pod_ready.go:103] pod "metrics-server-6867b74b74-t5stp" in "kube-system" namespace has status "Ready":"False"
	I0923 14:26:35.445508 1281940 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0923 14:26:35.460333 1281940 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0923 14:26:35.467954 1281940 out.go:201] 
	W0923 14:26:35.471847 1281940 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0923 14:26:35.471902 1281940 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0923 14:26:35.471923 1281940 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0923 14:26:35.471933 1281940 out.go:270] * 
	W0923 14:26:35.474059 1281940 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 14:26:35.478932 1281940 out.go:201] 
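	
	The stream above ends with K8S_UNHEALTHY_CONTROL_PLANE: the API server answers /healthz, but the control plane never reported v1.20.0. The kubelet problems it keeps surfacing are metrics-server stuck in ImagePullBackOff on the unresolvable fake.domain registry and dashboard-metrics-scraper in CrashLoopBackOff. A minimal sketch of inspecting those two pods after such a run, assuming the profile name doubles as the kubectl context (minikube's default) and that the pod names from the log are still current:
	
		# Hypothetical follow-up, not part of the captured run.
		kubectl --context old-k8s-version-545656 -n kube-system \
		  describe pod metrics-server-9975d5f86-vpnpr
		# --previous prints the last crashed container's output.
		kubectl --context old-k8s-version-545656 -n kubernetes-dashboard \
		  logs dashboard-metrics-scraper-8d5bb5db8-kdmx6 --previous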
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	815af577f2b9d       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   3218e6b9c7a31       dashboard-metrics-scraper-8d5bb5db8-kdmx6
	71bb1fd75816f       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   6a8672e4e3cc3       storage-provisioner
	d436677f4263e       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   08ede537831cc       kubernetes-dashboard-cd95d586-n6678
	88a28a705c8d5       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   0921f7cd5101d       kindnet-q9crm
	a833403c433e3       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   412909525a53c       busybox
	c9c7ddc774841       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   c159257d7da4a       kube-proxy-q9njx
	fd0b3672e4226       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   b3de28652b5e9       coredns-74ff55c5b-t25c8
	283e60b92c089       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   6a8672e4e3cc3       storage-provisioner
	3dde24db764bd       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   4025274ea2846       kube-controller-manager-old-k8s-version-545656
	9343974740060       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   eca31cd9425c8       kube-apiserver-old-k8s-version-545656
	850f69c15948f       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   a9b2870803c21       kube-scheduler-old-k8s-version-545656
	b9d9a2bf8276e       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   e5d7c0c1bae18       etcd-old-k8s-version-545656
	9b8ec1693185e       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   ae76a0689edbc       busybox
	a3b9b24a0ac2b       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   30f6287829044       coredns-74ff55c5b-t25c8
	1fc36e956f177       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   26df57b185752       kindnet-q9crm
	887c261910c0a       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   6d6ccda83fbd6       kube-proxy-q9njx
	b5d8783a53d2d       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   6d86ba704fb6f       etcd-old-k8s-version-545656
	47d4a96401e96       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   75be984ee8ae5       kube-scheduler-old-k8s-version-545656
	be9e1205f3d88       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   fe2a3c06dcead       kube-apiserver-old-k8s-version-545656
	84b0540d84740       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   08eaaeb47fdd6       kube-controller-manager-old-k8s-version-545656
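	
	Each control-plane component appears twice in the table above because the node was restarted during the test: the attempt-0 containers exited roughly 8 minutes ago and their attempt-1 replacements have been running for about 5. A minimal sketch of regenerating the table, assuming the profile from this run:
	
		# Not from the captured run: list all containers, running and
		# exited, through the CRI socket inside the minikube node.
		minikube -p old-k8s-version-545656 ssh -- sudo crictl ps -a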
	
	
	==> containerd <==
	Sep 23 14:22:52 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:22:52.666465325Z" level=info msg="StartContainer for \"9af47b6badc81e64c8b63ed7fe2542a219d52e2b24624fed185e89dc3db9e673\""
	Sep 23 14:22:52 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:22:52.749794647Z" level=info msg="StartContainer for \"9af47b6badc81e64c8b63ed7fe2542a219d52e2b24624fed185e89dc3db9e673\" returns successfully"
	Sep 23 14:22:52 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:22:52.780507016Z" level=info msg="shim disconnected" id=9af47b6badc81e64c8b63ed7fe2542a219d52e2b24624fed185e89dc3db9e673 namespace=k8s.io
	Sep 23 14:22:52 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:22:52.780821438Z" level=warning msg="cleaning up after shim disconnected" id=9af47b6badc81e64c8b63ed7fe2542a219d52e2b24624fed185e89dc3db9e673 namespace=k8s.io
	Sep 23 14:22:52 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:22:52.780903191Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 14:22:52 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:22:52.797200474Z" level=warning msg="cleanup warnings time=\"2024-09-23T14:22:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Sep 23 14:22:53 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:22:53.193118942Z" level=info msg="RemoveContainer for \"1d9e56f8b90b88238deb4875ee3990cf56871eed42d408be52320801dbf6f0de\""
	Sep 23 14:22:53 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:22:53.200454761Z" level=info msg="RemoveContainer for \"1d9e56f8b90b88238deb4875ee3990cf56871eed42d408be52320801dbf6f0de\" returns successfully"
	Sep 23 14:23:32 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:23:32.636313601Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 14:23:32 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:23:32.642893623Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 23 14:23:32 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:23:32.644672723Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 23 14:23:32 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:23:32.644766818Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 23 14:24:23 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:24:23.645915360Z" level=info msg="CreateContainer within sandbox \"3218e6b9c7a31b05864064795de3d0a02061fbe23d387747a159a968b8a7d200\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 23 14:24:23 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:24:23.677326878Z" level=info msg="CreateContainer within sandbox \"3218e6b9c7a31b05864064795de3d0a02061fbe23d387747a159a968b8a7d200\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da\""
	Sep 23 14:24:23 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:24:23.680518253Z" level=info msg="StartContainer for \"815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da\""
	Sep 23 14:24:23 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:24:23.807531317Z" level=info msg="StartContainer for \"815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da\" returns successfully"
	Sep 23 14:24:23 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:24:23.835953080Z" level=info msg="shim disconnected" id=815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da namespace=k8s.io
	Sep 23 14:24:23 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:24:23.836238128Z" level=warning msg="cleaning up after shim disconnected" id=815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da namespace=k8s.io
	Sep 23 14:24:23 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:24:23.836333781Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 14:24:24 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:24:24.458801598Z" level=info msg="RemoveContainer for \"9af47b6badc81e64c8b63ed7fe2542a219d52e2b24624fed185e89dc3db9e673\""
	Sep 23 14:24:24 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:24:24.465141650Z" level=info msg="RemoveContainer for \"9af47b6badc81e64c8b63ed7fe2542a219d52e2b24624fed185e89dc3db9e673\" returns successfully"
	Sep 23 14:26:25 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:26:25.636144970Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 14:26:25 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:26:25.641807483Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 23 14:26:25 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:26:25.643702544Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 23 14:26:25 old-k8s-version-545656 containerd[565]: time="2024-09-23T14:26:25.643734699Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
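	
	Every metrics-server pull above dies at DNS resolution: fake.domain cannot be resolved by the node's upstream resolver at 192.168.85.1:53, so the HEAD request to the registry never happens and the kubelet falls back to ImagePullBackOff. A sketch of reproducing the failure by hand, under the same assumption about the profile name; it should fail with the same "no such host" error:
	
		# Hypothetical reproduction, not part of the captured run.
		minikube -p old-k8s-version-545656 ssh -- \
		  sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4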
	
	
	==> coredns [a3b9b24a0ac2ba3ffe5ef85a8744793a07f149b0fc9162f4953ce9723cf138e2] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:51322 - 44820 "HINFO IN 70861365222266693.7790772990921289891. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.046454556s
	
	
	==> coredns [fd0b3672e4226f3a9ce49c816bce6a77f780589d4ca60220d5149a170319050e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:34627 - 48332 "HINFO IN 3327194721836673664.2877092076241784848. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01689526s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0923 14:21:21.784772       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 14:20:51.783914098 +0000 UTC m=+0.021547320) (total time: 30.000740911s):
	Trace[2019727887]: [30.000740911s] [30.000740911s] END
	E0923 14:21:21.784804       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0923 14:21:21.785590       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 14:20:51.785167575 +0000 UTC m=+0.022800805) (total time: 30.000404992s):
	Trace[939984059]: [30.000404992s] [30.000404992s] END
	E0923 14:21:21.785605       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0923 14:21:21.785771       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 14:20:51.785531357 +0000 UTC m=+0.023164579) (total time: 30.000227191s):
	Trace[911902081]: [30.000227191s] [30.000227191s] END
	E0923 14:21:21.785782       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
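	
	The restarted coredns spends its first 30 seconds unable to list Services, Endpoints, and Namespaces over the kubernetes service VIP (10.96.0.1:443), which is what the repeated "Still waiting on: kubernetes" readiness lines reflect; the i/o timeouts are not seen again after 14:21:21. A hedged way to check the API health coredns depends on, from outside the cluster (context name assumed to match the profile):
	
		# Not from the captured run: query the apiserver health
		# endpoints directly through kubectl.
		kubectl --context old-k8s-version-545656 get --raw /healthz
		kubectl --context old-k8s-version-545656 get --raw /readyz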
	
	
	==> describe nodes <==
	Name:               old-k8s-version-545656
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-545656
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=old-k8s-version-545656
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T14_17_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 14:17:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-545656
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 14:26:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 14:21:49 +0000   Mon, 23 Sep 2024 14:17:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 14:21:49 +0000   Mon, 23 Sep 2024 14:17:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 14:21:49 +0000   Mon, 23 Sep 2024 14:17:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 14:21:49 +0000   Mon, 23 Sep 2024 14:18:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-545656
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f63e38e65be4a44a2ec3933fc87f684
	  System UUID:                be44b794-baf3-48ec-b504-5a62518d4f65
	  Boot ID:                    202f1c12-eb3b-4d2d-8c7a-af93b822fb33
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 coredns-74ff55c5b-t25c8                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m25s
	  kube-system                 etcd-old-k8s-version-545656                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m32s
	  kube-system                 kindnet-q9crm                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m25s
	  kube-system                 kube-apiserver-old-k8s-version-545656             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-controller-manager-old-k8s-version-545656    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-proxy-q9njx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-scheduler-old-k8s-version-545656             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 metrics-server-9975d5f86-vpnpr                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m24s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-kdmx6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-n6678               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m53s (x5 over 8m53s)  kubelet     Node old-k8s-version-545656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m53s (x4 over 8m53s)  kubelet     Node old-k8s-version-545656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m53s (x4 over 8m53s)  kubelet     Node old-k8s-version-545656 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m32s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m32s                  kubelet     Node old-k8s-version-545656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m32s                  kubelet     Node old-k8s-version-545656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m32s                  kubelet     Node old-k8s-version-545656 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m32s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m25s                  kubelet     Node old-k8s-version-545656 status is now: NodeReady
	  Normal  Starting                 8m22s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m57s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m57s (x8 over 5m57s)  kubelet     Node old-k8s-version-545656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s (x8 over 5m57s)  kubelet     Node old-k8s-version-545656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x7 over 5m57s)  kubelet     Node old-k8s-version-545656 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m44s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [b5d8783a53d2d11669f6be2a7eb4b401a5f47cf25c492582727c488ac8a5caef] <==
	raft2024/09/23 14:17:46 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/09/23 14:17:46 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/09/23 14:17:46 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-09-23 14:17:46.537442 I | etcdserver: published {Name:old-k8s-version-545656 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-09-23 14:17:46.538539 I | embed: ready to serve client requests
	2024-09-23 14:17:46.540247 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-23 14:17:46.543723 I | embed: ready to serve client requests
	2024-09-23 14:17:46.545084 I | embed: serving client requests on 192.168.85.2:2379
	2024-09-23 14:17:46.550826 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-23 14:17:46.551199 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-23 14:17:46.552490 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-23 14:17:55.767618 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:18:14.325693 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:18:18.205890 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:18:28.204087 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:18:38.205396 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:18:48.204196 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:18:58.204053 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:19:08.204014 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:19:18.204210 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:19:28.205089 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:19:38.204049 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:19:48.204011 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:19:58.204123 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:20:08.204348 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [b9d9a2bf8276ebafb4a921965fcd0688cba3a3d7f3e36a556fbf1333d4daa3ec] <==
	2024-09-23 14:22:37.392162 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:22:47.392016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:22:57.392122 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:23:07.392066 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:23:17.392559 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:23:27.391900 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:23:37.391940 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:23:47.391966 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:23:57.391984 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:24:07.391946 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:24:17.391854 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:24:27.397879 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:24:37.392650 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:24:47.392149 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:24:57.392006 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:25:07.392052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:25:17.392150 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:25:27.391937 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:25:37.391907 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:25:47.392079 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:25:57.392094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:26:07.391922 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:26:17.392071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:26:27.391885 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 14:26:37.391916 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 14:26:37 up 1 day, 20:09,  0 users,  load average: 3.07, 2.30, 2.78
	Linux old-k8s-version-545656 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1fc36e956f177195ec7bb2879c68130ee9d4bccedde6d4e4e88de8b21021eaff] <==
	I0923 14:18:18.014487       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0923 14:18:18.014685       1 metrics.go:61] Registering metrics
	I0923 14:18:18.014918       1 controller.go:374] Syncing nftables rules
	I0923 14:18:27.912133       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:18:27.912394       1 main.go:299] handling current node
	I0923 14:18:37.912114       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:18:37.912157       1 main.go:299] handling current node
	I0923 14:18:47.915409       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:18:47.915644       1 main.go:299] handling current node
	I0923 14:18:57.921221       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:18:57.921261       1 main.go:299] handling current node
	I0923 14:19:07.919298       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:19:07.919348       1 main.go:299] handling current node
	I0923 14:19:17.912746       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:19:17.912779       1 main.go:299] handling current node
	I0923 14:19:27.912118       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:19:27.912154       1 main.go:299] handling current node
	I0923 14:19:37.919415       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:19:37.919454       1 main.go:299] handling current node
	I0923 14:19:47.917498       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:19:47.917534       1 main.go:299] handling current node
	I0923 14:19:57.912169       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:19:57.912206       1 main.go:299] handling current node
	I0923 14:20:07.913163       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:20:07.913389       1 main.go:299] handling current node
	
	
	==> kindnet [88a28a705c8d527d740b906dc519289dfabf6ac285b91b0b92bb61d200bfef3b] <==
	I0923 14:24:34.131532       1 main.go:299] handling current node
	I0923 14:24:44.135465       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:24:44.135515       1 main.go:299] handling current node
	I0923 14:24:54.128625       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:24:54.128664       1 main.go:299] handling current node
	I0923 14:25:04.136632       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:25:04.136975       1 main.go:299] handling current node
	I0923 14:25:14.134677       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:25:14.134775       1 main.go:299] handling current node
	I0923 14:25:24.137186       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:25:24.137224       1 main.go:299] handling current node
	I0923 14:25:34.135394       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:25:34.135436       1 main.go:299] handling current node
	I0923 14:25:44.135977       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:25:44.136080       1 main.go:299] handling current node
	I0923 14:25:54.128149       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:25:54.128184       1 main.go:299] handling current node
	I0923 14:26:04.128164       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:26:04.128203       1 main.go:299] handling current node
	I0923 14:26:14.136976       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:26:14.137028       1 main.go:299] handling current node
	I0923 14:26:24.129973       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:26:24.130014       1 main.go:299] handling current node
	I0923 14:26:34.132215       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 14:26:34.132269       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9343974740060da68bd72f33149b0da78610b2c58bd03613fe4252cd0e6098fe] <==
	I0923 14:23:26.170316       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 14:23:26.170327       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0923 14:23:51.938762       1 handler_proxy.go:102] no RequestInfo found in the context
	E0923 14:23:51.938845       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0923 14:23:51.938859       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0923 14:24:06.307427       1 client.go:360] parsed scheme: "passthrough"
	I0923 14:24:06.307474       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 14:24:06.307483       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 14:24:40.707433       1 client.go:360] parsed scheme: "passthrough"
	I0923 14:24:40.707489       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 14:24:40.707498       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 14:25:18.015451       1 client.go:360] parsed scheme: "passthrough"
	I0923 14:25:18.015502       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 14:25:18.015511       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0923 14:25:50.492681       1 handler_proxy.go:102] no RequestInfo found in the context
	E0923 14:25:50.492759       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0923 14:25:50.492768       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0923 14:25:59.607375       1 client.go:360] parsed scheme: "passthrough"
	I0923 14:25:59.607430       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 14:25:59.607438       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 14:26:30.728473       1 client.go:360] parsed scheme: "passthrough"
	I0923 14:26:30.728521       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 14:26:30.728530       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
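	Note: the repeating 503 for "v1beta1.metrics.k8s.io" means the aggregated APIService has no healthy backend, which is consistent with the metrics-server pod never starting (see the kubelet log below); the apiserver simply requeues the OpenAPI fetch. A quick confirmation (a sketch):
	
	  kubectl --context old-k8s-version-545656 get apiservice v1beta1.metrics.k8s.io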
	
	
	==> kube-apiserver [be9e1205f3d882e314d76a4c654b68bbe10c826131ec8685579853378c256cdf] <==
	I0923 14:17:54.533624       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0923 14:17:54.985365       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 14:17:55.086045       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0923 14:17:55.239148       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0923 14:17:55.240488       1 controller.go:606] quota admission added evaluator for: endpoints
	I0923 14:17:55.246767       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 14:17:55.530098       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 14:17:56.174927       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0923 14:17:56.707208       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0923 14:17:56.756112       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0923 14:18:12.330235       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0923 14:18:12.398022       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0923 14:18:22.164153       1 client.go:360] parsed scheme: "passthrough"
	I0923 14:18:22.164243       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 14:18:22.164269       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 14:18:59.276047       1 client.go:360] parsed scheme: "passthrough"
	I0923 14:18:59.276092       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 14:18:59.276101       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 14:19:30.619024       1 client.go:360] parsed scheme: "passthrough"
	I0923 14:19:30.619073       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 14:19:30.619082       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 14:20:05.423147       1 client.go:360] parsed scheme: "passthrough"
	I0923 14:20:05.423421       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 14:20:05.423552       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0923 14:20:11.193251       1 upgradeaware.go:387] Error proxying data from backend to client: tls: use of closed connection
	
	
	==> kube-controller-manager [3dde24db764bda6d4e042e67ecc98c0f0cc7fdeb8d0a7fd2c72ce1e26852e86f] <==
	W0923 14:22:13.451900       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 14:22:39.518798       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 14:22:45.102449       1 request.go:655] Throttling request took 1.048042854s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0923 14:22:45.953823       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 14:23:10.021327       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 14:23:17.604463       1 request.go:655] Throttling request took 1.048304261s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0923 14:23:18.456040       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 14:23:40.523250       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 14:23:50.106465       1 request.go:655] Throttling request took 1.048444324s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0923 14:23:50.957868       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 14:24:11.025278       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 14:24:22.608287       1 request.go:655] Throttling request took 1.048473771s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0923 14:24:23.459830       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 14:24:41.528109       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 14:24:55.110357       1 request.go:655] Throttling request took 1.048141561s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0923 14:24:55.961951       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 14:25:12.030702       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 14:25:27.612377       1 request.go:655] Throttling request took 1.048343314s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0923 14:25:28.463849       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 14:25:42.532674       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 14:26:00.123911       1 request.go:655] Throttling request took 1.057856314s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0923 14:26:00.966148       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 14:26:13.035288       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 14:26:32.616525       1 request.go:655] Throttling request took 1.048108911s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0923 14:26:33.468057       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
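	Note: the garbage-collector and resource-quota errors above share that root cause: API discovery still lists the dead metrics.k8s.io/v1beta1 group, so every sweep fails on it, and the throttled GETs are only client-side rate limiting, not a server problem. If metrics were not needed, deleting the stale APIService would unblock discovery (a sketch; this also disables the metrics-server addon):
	
	  kubectl --context old-k8s-version-545656 delete apiservice v1beta1.metrics.k8s.io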
	
	
	==> kube-controller-manager [84b0540d8474082f19ceb8e78549d61b293cb8d4fdacc1be7accd947b2a37629] <==
	W0923 14:18:12.461061       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-545656. Assuming now as a timestamp.
	I0923 14:18:12.461408       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0923 14:18:12.462092       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0923 14:18:12.464128       1 event.go:291] "Event occurred" object="old-k8s-version-545656" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-545656 event: Registered Node old-k8s-version-545656 in Controller"
	I0923 14:18:12.464879       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0923 14:18:12.466133       1 range_allocator.go:373] Set node old-k8s-version-545656 PodCIDR to [10.244.0.0/24]
	I0923 14:18:12.472822       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q9njx"
	I0923 14:18:12.502947       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0923 14:18:12.507013       1 shared_informer.go:247] Caches are synced for resource quota 
	I0923 14:18:12.582944       1 shared_informer.go:247] Caches are synced for job 
	I0923 14:18:12.603940       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-q9crm"
	I0923 14:18:12.670316       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-mdjnm"
	I0923 14:18:12.717452       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0923 14:18:12.974722       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0923 14:18:12.974782       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0923 14:18:13.019229       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0923 14:18:13.367268       1 request.go:655] Throttling request took 1.053847549s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	I0923 14:18:14.161177       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0923 14:18:14.161213       1 shared_informer.go:247] Caches are synced for resource quota 
	I0923 14:18:15.741819       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0923 14:18:15.768027       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-mdjnm"
	I0923 14:18:17.461887       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0923 14:20:12.137601       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0923 14:20:12.191573       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0923 14:20:12.221556       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	
	
	==> kube-proxy [887c261910c0ac2a1a37fcf27bc576ab5b3f393d0a37e6261d52e33adc91d025] <==
	I0923 14:18:14.937798       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0923 14:18:14.937882       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0923 14:18:15.035926       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0923 14:18:15.036036       1 server_others.go:185] Using iptables Proxier.
	I0923 14:18:15.036419       1 server.go:650] Version: v1.20.0
	I0923 14:18:15.047638       1 config.go:315] Starting service config controller
	I0923 14:18:15.047705       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0923 14:18:15.050231       1 config.go:224] Starting endpoint slice config controller
	I0923 14:18:15.050251       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0923 14:18:15.147905       1 shared_informer.go:247] Caches are synced for service config 
	I0923 14:18:15.150394       1 shared_informer.go:247] Caches are synced for endpoint slice config 
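	Note: the 'Unknown proxy mode ""' warning only means no proxy mode was configured, so kube-proxy fell back to the iptables proxier; it is not an error, and it recurs after the restart in the next kube-proxy log. To pin the mode explicitly one could set it in the component config (a sketch, not something this run does):
	
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  mode: "iptables"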
	
	
	==> kube-proxy [c9c7ddc774841a7c6efaaa62977175f48b5dedf2367b9fc067ed9b2a5bc363d2] <==
	I0923 14:20:53.463472       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0923 14:20:53.463616       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0923 14:20:53.484596       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0923 14:20:53.484885       1 server_others.go:185] Using iptables Proxier.
	I0923 14:20:53.485242       1 server.go:650] Version: v1.20.0
	I0923 14:20:53.486130       1 config.go:315] Starting service config controller
	I0923 14:20:53.487795       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0923 14:20:53.486439       1 config.go:224] Starting endpoint slice config controller
	I0923 14:20:53.487832       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0923 14:20:53.587978       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0923 14:20:53.588032       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [47d4a96401e96f9bdad8ea7163b2276594b5e8f5a2b49d484ffcd0f293ba4f76] <==
	I0923 14:17:48.167575       1 serving.go:331] Generated self-signed cert in-memory
	W0923 14:17:53.717715       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 14:17:53.717750       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 14:17:53.717766       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 14:17:53.717788       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 14:17:53.821214       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0923 14:17:53.831420       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 14:17:53.831508       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 14:17:53.831551       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0923 14:17:53.884780       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 14:17:53.885052       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 14:17:53.885118       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 14:17:53.885183       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 14:17:53.885241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 14:17:53.885296       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 14:17:53.885348       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 14:17:53.885498       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 14:17:53.885659       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 14:17:53.885745       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 14:17:53.886772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 14:17:53.886919       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0923 14:17:55.531712       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
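	Note: the burst of "forbidden" reflector errors at 14:17:53 is normal on first start: the scheduler comes up before the RBAC bootstrap roles exist, and the errors stop once its caches sync (the final line above). Permissions can be spot-checked afterwards via impersonation (a sketch):
	
	  kubectl --context old-k8s-version-545656 auth can-i list poddisruptionbudgets --as=system:kube-scheduler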
	
	
	==> kube-scheduler [850f69c15948f400f07998618b8a98a1a66b3958ce81a0910da849567fd833f9] <==
	I0923 14:20:45.590001       1 serving.go:331] Generated self-signed cert in-memory
	W0923 14:20:49.395305       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 14:20:49.395445       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 14:20:49.395456       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 14:20:49.395463       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 14:20:49.490789       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0923 14:20:49.499466       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 14:20:49.499490       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 14:20:49.491560       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0923 14:20:49.599559       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 23 14:25:07 old-k8s-version-545656 kubelet[660]: E0923 14:25:07.635738     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 14:25:11 old-k8s-version-545656 kubelet[660]: I0923 14:25:11.635019     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da
	Sep 23 14:25:11 old-k8s-version-545656 kubelet[660]: E0923 14:25:11.635425     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	Sep 23 14:25:18 old-k8s-version-545656 kubelet[660]: E0923 14:25:18.635950     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 14:25:25 old-k8s-version-545656 kubelet[660]: I0923 14:25:25.635043     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da
	Sep 23 14:25:25 old-k8s-version-545656 kubelet[660]: E0923 14:25:25.635412     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	Sep 23 14:25:33 old-k8s-version-545656 kubelet[660]: E0923 14:25:33.635781     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 14:25:40 old-k8s-version-545656 kubelet[660]: I0923 14:25:40.635470     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da
	Sep 23 14:25:40 old-k8s-version-545656 kubelet[660]: E0923 14:25:40.635814     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	Sep 23 14:25:45 old-k8s-version-545656 kubelet[660]: E0923 14:25:45.635813     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 14:25:52 old-k8s-version-545656 kubelet[660]: I0923 14:25:52.634986     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da
	Sep 23 14:25:52 old-k8s-version-545656 kubelet[660]: E0923 14:25:52.635815     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	Sep 23 14:25:57 old-k8s-version-545656 kubelet[660]: E0923 14:25:57.635993     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 14:26:06 old-k8s-version-545656 kubelet[660]: I0923 14:26:06.635166     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da
	Sep 23 14:26:06 old-k8s-version-545656 kubelet[660]: E0923 14:26:06.636053     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	Sep 23 14:26:12 old-k8s-version-545656 kubelet[660]: E0923 14:26:12.639509     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 14:26:18 old-k8s-version-545656 kubelet[660]: I0923 14:26:18.634990     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da
	Sep 23 14:26:18 old-k8s-version-545656 kubelet[660]: E0923 14:26:18.635458     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	Sep 23 14:26:25 old-k8s-version-545656 kubelet[660]: E0923 14:26:25.644019     660 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 23 14:26:25 old-k8s-version-545656 kubelet[660]: E0923 14:26:25.644074     660 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 23 14:26:25 old-k8s-version-545656 kubelet[660]: E0923 14:26:25.644222     660 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-2jjpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 23 14:26:25 old-k8s-version-545656 kubelet[660]: E0923 14:26:25.644261     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 23 14:26:30 old-k8s-version-545656 kubelet[660]: I0923 14:26:30.635475     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 815af577f2b9d9a451d8beecbeebc7e7488d48a70d699501a94ac6e03b92b6da
	Sep 23 14:26:30 old-k8s-version-545656 kubelet[660]: E0923 14:26:30.635812     660 pod_workers.go:191] Error syncing pod 064b7bcd-c4b2-4a4e-83e2-a26d808fe715 ("dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kdmx6_kubernetes-dashboard(064b7bcd-c4b2-4a4e-83e2-a26d808fe715)"
	Sep 23 14:26:37 old-k8s-version-545656 kubelet[660]: E0923 14:26:37.655244     660 pod_workers.go:191] Error syncing pod ea59d944-5cc9-4ac9-88fe-7244bec9f7c2 ("metrics-server-9975d5f86-vpnpr_kube-system(ea59d944-5cc9-4ac9-88fe-7244bec9f7c2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
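	Note: the metrics-server image here points at the registry host fake.domain, which can never resolve, so the ErrImagePull/ImagePullBackOff loop is the expected outcome of an apparently deliberate fake image rather than a regression in this run. The failure reproduces directly on the node (a sketch, assuming crictl is available there):
	
	  sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4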
	
	
	==> kubernetes-dashboard [d436677f4263ed0ad8726b46d1748e6db9bf9362ca86a377d375230938839542] <==
	2024/09/23 14:21:12 Using namespace: kubernetes-dashboard
	2024/09/23 14:21:12 Using in-cluster config to connect to apiserver
	2024/09/23 14:21:12 Using secret token for csrf signing
	2024/09/23 14:21:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/23 14:21:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/23 14:21:12 Successful initial request to the apiserver, version: v1.20.0
	2024/09/23 14:21:12 Generating JWE encryption key
	2024/09/23 14:21:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/23 14:21:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/23 14:21:12 Initializing JWE encryption key from synchronized object
	2024/09/23 14:21:12 Creating in-cluster Sidecar client
	2024/09/23 14:21:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 14:21:12 Serving insecurely on HTTP port: 9090
	2024/09/23 14:21:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 14:22:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 14:22:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 14:23:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 14:23:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 14:24:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 14:24:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 14:25:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 14:25:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 14:26:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 14:21:12 Starting overwatch
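	Note: the dashboard's metric client fails for the same reason (no live backend behind the metrics APIService), but the UI keeps serving on port 9090 regardless; the out-of-order "Starting overwatch" line looks like a late-flushed 14:21:12 entry, not a restart. A quick health check (a sketch):
	
	  kubectl --context old-k8s-version-545656 -n kubernetes-dashboard get pods,svc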
	
	
	==> storage-provisioner [283e60b92c0899bbba55dee84e115e46cec771da843548c50323ce5c770e68eb] <==
	I0923 14:20:51.481763       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0923 14:21:21.483925       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
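	Note: this first provisioner instance exited fatally because it could not reach the in-cluster apiserver VIP (10.96.0.1:443, the default "kubernetes" Service) within its 30s window during the restart; the relaunched instance below connects and acquires the leader lease. The VIP can be confirmed with (a sketch):
	
	  kubectl --context old-k8s-version-545656 get svc kubernetes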
	
	
	==> storage-provisioner [71bb1fd75816fca4569a2dda9e4f08b1fa69af032a89eee45b74edbd7cd7fadf] <==
	I0923 14:21:33.747082       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 14:21:33.771940       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 14:21:33.772175       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 14:21:51.242264       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 14:21:51.242493       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-545656_174acc73-de7b-4d5e-a87d-0ee183e9d82e!
	I0923 14:21:51.243294       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a30ff922-769f-4d1c-9b86-5d3f43346cb3", APIVersion:"v1", ResourceVersion:"863", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-545656_174acc73-de7b-4d5e-a87d-0ee183e9d82e became leader
	I0923 14:21:51.342988       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-545656_174acc73-de7b-4d5e-a87d-0ee183e9d82e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-545656 -n old-k8s-version-545656
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-545656 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-vpnpr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-545656 describe pod metrics-server-9975d5f86-vpnpr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-545656 describe pod metrics-server-9975d5f86-vpnpr: exit status 1 (138.753792ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-vpnpr" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-545656 describe pod metrics-server-9975d5f86-vpnpr: exit status 1
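Note: the NotFound here is most likely a namespace miss rather than a vanished pod: the pod listing used -A (all namespaces) while the describe ran without -n, so it looked in default instead of kube-system. Querying by label in the right namespace sidesteps both that and pod-name churn (a sketch; k8s-app=metrics-server is the addon's usual selector, assumed here):

  kubectl --context old-k8s-version-545656 -n kube-system get pods -l k8s-app=metrics-server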
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.76s)


Test pass (298/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.13
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 8
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 266.78
31 TestAddons/serial/GCPAuth/Namespaces 0.17
33 TestAddons/parallel/Registry 14.96
34 TestAddons/parallel/Ingress 18.78
35 TestAddons/parallel/InspektorGadget 11.84
36 TestAddons/parallel/MetricsServer 6.03
38 TestAddons/parallel/CSI 42.45
39 TestAddons/parallel/Headlamp 17.22
40 TestAddons/parallel/CloudSpanner 5.66
41 TestAddons/parallel/LocalPath 53.04
42 TestAddons/parallel/NvidiaDevicePlugin 5.63
43 TestAddons/parallel/Yakd 11.82
44 TestAddons/StoppedEnableDisable 12.3
45 TestCertOptions 42.1
46 TestCertExpiration 232.18
48 TestForceSystemdFlag 42.84
49 TestForceSystemdEnv 37.4
50 TestDockerEnvContainerd 42.82
55 TestErrorSpam/setup 29.28
56 TestErrorSpam/start 0.77
57 TestErrorSpam/status 1.1
58 TestErrorSpam/pause 1.83
59 TestErrorSpam/unpause 1.82
60 TestErrorSpam/stop 1.5
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 52.24
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 6.51
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.11
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.16
72 TestFunctional/serial/CacheCmd/cache/add_local 1.31
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.02
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 45.31
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.73
83 TestFunctional/serial/LogsFileCmd 1.74
84 TestFunctional/serial/InvalidService 6.03
86 TestFunctional/parallel/ConfigCmd 0.43
87 TestFunctional/parallel/DashboardCmd 9.53
88 TestFunctional/parallel/DryRun 0.38
89 TestFunctional/parallel/InternationalLanguage 0.22
90 TestFunctional/parallel/StatusCmd 1.27
94 TestFunctional/parallel/ServiceCmdConnect 8.81
95 TestFunctional/parallel/AddonsCmd 0.21
96 TestFunctional/parallel/PersistentVolumeClaim 26.62
98 TestFunctional/parallel/SSHCmd 0.75
99 TestFunctional/parallel/CpCmd 2.07
101 TestFunctional/parallel/FileSync 0.39
102 TestFunctional/parallel/CertSync 2.19
106 TestFunctional/parallel/NodeLabels 0.1
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
110 TestFunctional/parallel/License 0.3
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.32
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
124 TestFunctional/parallel/ServiceCmd/List 0.73
125 TestFunctional/parallel/ProfileCmd/profile_list 0.6
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
128 TestFunctional/parallel/MountCmd/any-port 7.68
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
130 TestFunctional/parallel/ServiceCmd/Format 0.38
131 TestFunctional/parallel/ServiceCmd/URL 0.46
132 TestFunctional/parallel/MountCmd/specific-port 1.42
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.65
134 TestFunctional/parallel/Version/short 0.06
135 TestFunctional/parallel/Version/components 1.32
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
141 TestFunctional/parallel/ImageCommands/Setup 0.71
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.51
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.69
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.87
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 124.26
159 TestMultiControlPlane/serial/DeployApp 31.61
160 TestMultiControlPlane/serial/PingHostFromPods 1.66
161 TestMultiControlPlane/serial/AddWorkerNode 24.18
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
164 TestMultiControlPlane/serial/CopyFile 19.24
165 TestMultiControlPlane/serial/StopSecondaryNode 12.91
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
167 TestMultiControlPlane/serial/RestartSecondaryNode 18.34
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.06
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 122.36
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.71
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
172 TestMultiControlPlane/serial/StopCluster 36.13
173 TestMultiControlPlane/serial/RestartCluster 76.65
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
175 TestMultiControlPlane/serial/AddSecondaryNode 44.76
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
180 TestJSONOutput/start/Command 84.79
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.75
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.68
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.76
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.22
205 TestKicCustomNetwork/create_custom_network 40.68
206 TestKicCustomNetwork/use_default_bridge_network 36.62
207 TestKicExistingNetwork 32.75
208 TestKicCustomSubnet 33.34
209 TestKicStaticIP 34.53
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 68.45
214 TestMountStart/serial/StartWithMountFirst 6.4
215 TestMountStart/serial/VerifyMountFirst 0.26
216 TestMountStart/serial/StartWithMountSecond 5.99
217 TestMountStart/serial/VerifyMountSecond 0.26
218 TestMountStart/serial/DeleteFirst 1.65
219 TestMountStart/serial/VerifyMountPostDelete 0.25
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 7.36
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 93.2
226 TestMultiNode/serial/DeployApp2Nodes 17.62
227 TestMultiNode/serial/PingHostFrom2Pods 0.95
228 TestMultiNode/serial/AddNode 20.07
229 TestMultiNode/serial/MultiNodeLabels 0.1
230 TestMultiNode/serial/ProfileList 0.7
231 TestMultiNode/serial/CopyFile 9.97
232 TestMultiNode/serial/StopNode 2.27
233 TestMultiNode/serial/StartAfterStop 9.97
234 TestMultiNode/serial/RestartKeepsNodes 105
235 TestMultiNode/serial/DeleteNode 5.54
236 TestMultiNode/serial/StopMultiNode 24.06
237 TestMultiNode/serial/RestartMultiNode 47.93
238 TestMultiNode/serial/ValidateNameConflict 36.67
243 TestPreload 111.8
245 TestScheduledStopUnix 107.51
248 TestInsufficientStorage 9.9
249 TestRunningBinaryUpgrade 87.05
251 TestKubernetesUpgrade 109.7
252 TestMissingContainerUpgrade 149.62
254 TestPause/serial/Start 65.58
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
257 TestNoKubernetes/serial/StartWithK8s 44.28
258 TestNoKubernetes/serial/StartWithStopK8s 8.26
259 TestNoKubernetes/serial/Start 6.06
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
261 TestNoKubernetes/serial/ProfileList 1.03
262 TestNoKubernetes/serial/Stop 1.22
263 TestNoKubernetes/serial/StartNoArgs 7.72
264 TestPause/serial/SecondStartNoReconfiguration 7.58
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
269 TestPause/serial/Pause 0.89
270 TestPause/serial/VerifyStatus 0.38
271 TestPause/serial/Unpause 0.82
272 TestPause/serial/PauseAgain 1.11
277 TestNetworkPlugins/group/false 4.86
278 TestPause/serial/DeletePaused 2.88
279 TestPause/serial/VerifyDeletedResources 0.17
283 TestStoppedBinaryUpgrade/Setup 1.34
284 TestStoppedBinaryUpgrade/Upgrade 133.23
285 TestStoppedBinaryUpgrade/MinikubeLogs 1
293 TestNetworkPlugins/group/auto/Start 95.63
294 TestNetworkPlugins/group/flannel/Start 52.24
295 TestNetworkPlugins/group/auto/KubeletFlags 0.37
296 TestNetworkPlugins/group/auto/NetCatPod 10.43
297 TestNetworkPlugins/group/auto/DNS 0.3
298 TestNetworkPlugins/group/auto/Localhost 0.19
299 TestNetworkPlugins/group/auto/HairPin 0.26
300 TestNetworkPlugins/group/calico/Start 71.41
301 TestNetworkPlugins/group/flannel/ControllerPod 6.01
302 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
303 TestNetworkPlugins/group/flannel/NetCatPod 11.37
304 TestNetworkPlugins/group/flannel/DNS 0.3
305 TestNetworkPlugins/group/flannel/Localhost 0.24
306 TestNetworkPlugins/group/flannel/HairPin 0.29
307 TestNetworkPlugins/group/custom-flannel/Start 55.24
308 TestNetworkPlugins/group/calico/ControllerPod 6
309 TestNetworkPlugins/group/calico/KubeletFlags 0.42
310 TestNetworkPlugins/group/calico/NetCatPod 11.4
311 TestNetworkPlugins/group/calico/DNS 0.19
312 TestNetworkPlugins/group/calico/Localhost 0.17
313 TestNetworkPlugins/group/calico/HairPin 0.18
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.39
316 TestNetworkPlugins/group/kindnet/Start 96.1
317 TestNetworkPlugins/group/custom-flannel/DNS 0.24
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
320 TestNetworkPlugins/group/bridge/Start 78.27
321 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
322 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
323 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
325 TestNetworkPlugins/group/bridge/NetCatPod 9.28
326 TestNetworkPlugins/group/kindnet/DNS 0.24
327 TestNetworkPlugins/group/kindnet/Localhost 0.21
328 TestNetworkPlugins/group/kindnet/HairPin 0.2
329 TestNetworkPlugins/group/bridge/DNS 0.24
330 TestNetworkPlugins/group/bridge/Localhost 0.27
331 TestNetworkPlugins/group/bridge/HairPin 0.21
332 TestNetworkPlugins/group/enable-default-cni/Start 47.49
334 TestStartStop/group/old-k8s-version/serial/FirstStart 175.94
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.46
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
341 TestStartStop/group/no-preload/serial/FirstStart 62.24
342 TestStartStop/group/no-preload/serial/DeployApp 8.35
343 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
344 TestStartStop/group/no-preload/serial/Stop 12.11
345 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
346 TestStartStop/group/no-preload/serial/SecondStart 266.74
347 TestStartStop/group/old-k8s-version/serial/DeployApp 9.91
348 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.46
349 TestStartStop/group/old-k8s-version/serial/Stop 12.2
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
352 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
353 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
354 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
355 TestStartStop/group/no-preload/serial/Pause 3.08
357 TestStartStop/group/embed-certs/serial/FirstStart 80.35
358 TestStartStop/group/embed-certs/serial/DeployApp 9.37
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.49
360 TestStartStop/group/embed-certs/serial/Stop 12.09
361 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
362 TestStartStop/group/embed-certs/serial/SecondStart 267.59
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
365 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
366 TestStartStop/group/old-k8s-version/serial/Pause 2.96
368 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.58
369 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.36
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.23
371 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 265.42
374 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
376 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
377 TestStartStop/group/embed-certs/serial/Pause 3.08
379 TestStartStop/group/newest-cni/serial/FirstStart 38.65
380 TestStartStop/group/newest-cni/serial/DeployApp 0
381 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.25
382 TestStartStop/group/newest-cni/serial/Stop 1.29
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
384 TestStartStop/group/newest-cni/serial/SecondStart 16.25
385 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
388 TestStartStop/group/newest-cni/serial/Pause 3.34
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.92

TestDownloadOnly/v1.20.0/json-events (7.13s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-234829 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-234829 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.129372417s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.13s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 13:23:05.851244 1033616 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0923 13:23:05.851348 1033616 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-234829
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-234829: exit status 85 (64.823155ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-234829 | jenkins | v1.34.0 | 23 Sep 24 13:22 UTC |          |
	|         | -p download-only-234829        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:22:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:22:58.762925 1033621 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:22:58.763134 1033621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:22:58.763155 1033621 out.go:358] Setting ErrFile to fd 2...
	I0923 13:22:58.763174 1033621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:22:58.763569 1033621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	W0923 13:22:58.763776 1033621 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19690-1028234/.minikube/config/config.json: open /home/jenkins/minikube-integration/19690-1028234/.minikube/config/config.json: no such file or directory
	I0923 13:22:58.764243 1033621 out.go:352] Setting JSON to true
	I0923 13:22:58.765152 1033621 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":155125,"bootTime":1726942654,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0923 13:22:58.765274 1033621 start.go:139] virtualization:  
	I0923 13:22:58.769257 1033621 out.go:97] [download-only-234829] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0923 13:22:58.769461 1033621 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 13:22:58.769560 1033621 notify.go:220] Checking for updates...
	I0923 13:22:58.772860 1033621 out.go:169] MINIKUBE_LOCATION=19690
	I0923 13:22:58.775485 1033621 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:22:58.778235 1033621 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 13:22:58.780699 1033621 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	I0923 13:22:58.784025 1033621 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 13:22:58.788940 1033621 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 13:22:58.789227 1033621 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:22:58.813935 1033621 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:22:58.814043 1033621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:22:58.874691 1033621 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:22:58.865003454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:22:58.874805 1033621 docker.go:318] overlay module found
	I0923 13:22:58.877473 1033621 out.go:97] Using the docker driver based on user configuration
	I0923 13:22:58.877501 1033621 start.go:297] selected driver: docker
	I0923 13:22:58.877508 1033621 start.go:901] validating driver "docker" against <nil>
	I0923 13:22:58.877636 1033621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:22:58.936672 1033621 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:22:58.927380122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:22:58.936916 1033621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:22:58.937198 1033621 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 13:22:58.937349 1033621 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 13:22:58.940260 1033621 out.go:169] Using Docker driver with root privileges
	I0923 13:22:58.942717 1033621 cni.go:84] Creating CNI manager for ""
	I0923 13:22:58.942772 1033621 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 13:22:58.942785 1033621 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 13:22:58.942873 1033621 start.go:340] cluster config:
	{Name:download-only-234829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-234829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:22:58.945529 1033621 out.go:97] Starting "download-only-234829" primary control-plane node in "download-only-234829" cluster
	I0923 13:22:58.945565 1033621 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 13:22:58.948128 1033621 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:22:58.948165 1033621 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0923 13:22:58.948335 1033621 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:22:58.965531 1033621 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:22:58.966369 1033621 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 13:22:58.966475 1033621 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:22:59.012707 1033621 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0923 13:22:59.012743 1033621 cache.go:56] Caching tarball of preloaded images
	I0923 13:22:59.013306 1033621 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0923 13:22:59.016235 1033621 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 13:22:59.016268 1033621 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0923 13:22:59.097792 1033621 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-234829 host does not exist
	  To start a cluster, run: "minikube start -p download-only-234829"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-234829
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (8s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-021106 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-021106 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.001287685s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 13:23:14.255652 1033616 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0923 13:23:14.255695 1033616 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-021106
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-021106: exit status 85 (82.209421ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-234829 | jenkins | v1.34.0 | 23 Sep 24 13:22 UTC |                     |
	|         | -p download-only-234829        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	| delete  | -p download-only-234829        | download-only-234829 | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	| start   | -o=json --download-only        | download-only-021106 | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC |                     |
	|         | -p download-only-021106        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:23:06
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:23:06.298432 1033819 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:23:06.298633 1033819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:23:06.298662 1033819 out.go:358] Setting ErrFile to fd 2...
	I0923 13:23:06.298684 1033819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:23:06.298950 1033819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	I0923 13:23:06.299403 1033819 out.go:352] Setting JSON to true
	I0923 13:23:06.300305 1033819 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":155133,"bootTime":1726942654,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0923 13:23:06.300404 1033819 start.go:139] virtualization:  
	I0923 13:23:06.302168 1033819 out.go:97] [download-only-021106] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:23:06.302437 1033819 notify.go:220] Checking for updates...
	I0923 13:23:06.304032 1033819 out.go:169] MINIKUBE_LOCATION=19690
	I0923 13:23:06.305963 1033819 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:23:06.307459 1033819 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 13:23:06.308610 1033819 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	I0923 13:23:06.309900 1033819 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 13:23:06.312511 1033819 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 13:23:06.312799 1033819 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:23:06.333786 1033819 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:23:06.333902 1033819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:23:06.397389 1033819 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 13:23:06.38761217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:23:06.397500 1033819 docker.go:318] overlay module found
	I0923 13:23:06.398951 1033819 out.go:97] Using the docker driver based on user configuration
	I0923 13:23:06.398980 1033819 start.go:297] selected driver: docker
	I0923 13:23:06.398988 1033819 start.go:901] validating driver "docker" against <nil>
	I0923 13:23:06.399111 1033819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:23:06.451221 1033819 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 13:23:06.442277753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:23:06.451466 1033819 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:23:06.451792 1033819 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 13:23:06.451947 1033819 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 13:23:06.453378 1033819 out.go:169] Using Docker driver with root privileges
	I0923 13:23:06.454733 1033819 cni.go:84] Creating CNI manager for ""
	I0923 13:23:06.454785 1033819 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 13:23:06.454798 1033819 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 13:23:06.454891 1033819 start.go:340] cluster config:
	{Name:download-only-021106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-021106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:23:06.456243 1033819 out.go:97] Starting "download-only-021106" primary control-plane node in "download-only-021106" cluster
	I0923 13:23:06.456267 1033819 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 13:23:06.457384 1033819 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:23:06.457409 1033819 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 13:23:06.457509 1033819 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:23:06.473077 1033819 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:23:06.473232 1033819 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 13:23:06.473252 1033819 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 13:23:06.473256 1033819 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 13:23:06.473264 1033819 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 13:23:06.528332 1033819 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0923 13:23:06.528356 1033819 cache.go:56] Caching tarball of preloaded images
	I0923 13:23:06.528516 1033819 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 13:23:06.530076 1033819 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 13:23:06.530101 1033819 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0923 13:23:06.614850 1033819 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19690-1028234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-021106 host does not exist
	  To start a cluster, run: "minikube start -p download-only-021106"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-021106
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I0923 13:23:15.518782 1033616 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-015618 --alsologtostderr --binary-mirror http://127.0.0.1:38111 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-015618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-015618
--- PASS: TestBinaryMirror (0.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-095355
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-095355: exit status 85 (65.42323ms)

-- stdout --
	* Profile "addons-095355" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-095355"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-095355
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-095355: exit status 85 (72.876411ms)

-- stdout --
	* Profile "addons-095355" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-095355"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (266.78s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-095355 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-095355 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (4m26.776013212s)
--- PASS: TestAddons/Setup (266.78s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-095355 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-095355 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/parallel/Registry (14.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.14649ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-k2d2s" [08ec5d70-1841-4275-80d9-904261052f24] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003790055s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mg8qm" [25545c68-e953-40fe-b4be-67ff7a5e0e3d] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004293287s
addons_test.go:338: (dbg) Run:  kubectl --context addons-095355 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-095355 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-095355 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.922418609s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 ip
2024/09/23 13:31:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.96s)

TestAddons/parallel/Ingress (18.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-095355 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-095355 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-095355 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b4d72379-0180-4aac-b324-efadf91bbf90] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b4d72379-0180-4aac-b324-efadf91bbf90] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003558962s
I0923 13:32:54.499622 1033616 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-095355 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-095355 addons disable ingress-dns --alsologtostderr -v=1: (1.186745032s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-095355 addons disable ingress --alsologtostderr -v=1: (7.877874984s)
--- PASS: TestAddons/parallel/Ingress (18.78s)

TestAddons/parallel/InspektorGadget (11.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4bwtr" [119b26e6-b744-4543-96a4-112aa3284ecd] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004808623s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-095355
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-095355: (5.835692653s)
--- PASS: TestAddons/parallel/InspektorGadget (11.84s)

TestAddons/parallel/MetricsServer (6.03s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.575518ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-8qvf4" [257c8283-1a4c-40b3-bfc8-621bf39df1e3] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004076269s
addons_test.go:413: (dbg) Run:  kubectl --context addons-095355 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.03s)

TestAddons/parallel/CSI (42.45s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0923 13:32:03.392028 1033616 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 13:32:03.397592 1033616 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 13:32:03.397620 1033616 kapi.go:107] duration metric: took 7.403196ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.412557ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-095355 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-095355 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9950f5b9-6fe0-431e-9b8f-48aac9a087e0] Pending
helpers_test.go:344: "task-pv-pod" [9950f5b9-6fe0-431e-9b8f-48aac9a087e0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9950f5b9-6fe0-431e-9b8f-48aac9a087e0] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004529061s
addons_test.go:528: (dbg) Run:  kubectl --context addons-095355 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-095355 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-095355 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-095355 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-095355 delete pod task-pv-pod: (1.117469501s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-095355 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-095355 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-095355 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [63cf86f1-39ef-4483-af9d-fb4f4b80ac66] Pending
helpers_test.go:344: "task-pv-pod-restore" [63cf86f1-39ef-4483-af9d-fb4f4b80ac66] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [63cf86f1-39ef-4483-af9d-fb4f4b80ac66] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.016294004s
addons_test.go:570: (dbg) Run:  kubectl --context addons-095355 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-095355 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-095355 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-095355 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.900471152s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.45s)
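
For reference, the snapshot-and-restore flow exercised above can be replayed by hand against the same profile. A minimal sketch using only commands that appear in this log (the manifests are the test's testdata files):

    # Provision a PVC and pod on the csi-hostpath driver, then snapshot the volume.
    kubectl --context addons-095355 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-095355 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-095355 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # Poll until the snapshot reports readyToUse=true before restoring from it.
    kubectl --context addons-095355 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
    # Restore the snapshot into a new PVC and mount it in a fresh pod.
    kubectl --context addons-095355 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-095355 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml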

TestAddons/parallel/Headlamp (17.22s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-095355 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-095355 --alsologtostderr -v=1: (1.416648691s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-wjq6r" [999dd198-8a8b-46f8-8d01-d3773b7b168e] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-wjq6r" [999dd198-8a8b-46f8-8d01-d3773b7b168e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-wjq6r" [999dd198-8a8b-46f8-8d01-d3773b7b168e] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004342259s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-095355 addons disable headlamp --alsologtostderr -v=1: (5.794616499s)
--- PASS: TestAddons/parallel/Headlamp (17.22s)

TestAddons/parallel/CloudSpanner (5.66s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-8svqk" [46d499d7-22b3-4ad9-a452-7b60c8b16d6a] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00400929s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-095355
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

TestAddons/parallel/LocalPath (53.04s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-095355 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-095355 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095355 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9dc51625-3fe1-48bc-a6a1-48fafb4c5d89] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9dc51625-3fe1-48bc-a6a1-48fafb4c5d89] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9dc51625-3fe1-48bc-a6a1-48fafb4c5d89] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00419388s
addons_test.go:938: (dbg) Run:  kubectl --context addons-095355 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 ssh "cat /opt/local-path-provisioner/pvc-8ed31a51-0b5c-451d-9f0b-c3f88e0a39fa_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-095355 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-095355 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-095355 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.681771385s)
--- PASS: TestAddons/parallel/LocalPath (53.04s)
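
The local-path check above amounts to writing through a PVC and reading the file back from the node's provisioner directory. A minimal sketch (the pvc-... UID in the host path is generated per run, so it will differ):

    kubectl --context addons-095355 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-095355 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # Volumes land under /opt/local-path-provisioner/<pvc-uid>_<namespace>_<pvc-name> on the node.
    out/minikube-linux-arm64 -p addons-095355 ssh "cat /opt/local-path-provisioner/pvc-8ed31a51-0b5c-451d-9f0b-c3f88e0a39fa_default_test-pvc/file1"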

TestAddons/parallel/NvidiaDevicePlugin (5.63s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mm7dj" [8ef39fa4-b9a6-4677-a1c2-02424564ea03] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004302075s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-095355
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.63s)

TestAddons/parallel/Yakd (11.82s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-w88np" [b9a0366b-13a9-4620-9e6b-9a550dd51276] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003941898s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-095355 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-095355 addons disable yakd --alsologtostderr -v=1: (5.818046595s)
--- PASS: TestAddons/parallel/Yakd (11.82s)

TestAddons/StoppedEnableDisable (12.3s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-095355
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-095355: (12.024002045s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-095355
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-095355
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-095355
--- PASS: TestAddons/StoppedEnableDisable (12.30s)

TestCertOptions (42.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-116071 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-116071 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (39.521010652s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-116071 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-116071 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-116071 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-116071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-116071
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-116071: (1.955321697s)
--- PASS: TestCertOptions (42.10s)
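
Both assertions can be reproduced by hand with the two ssh commands the test runs. A minimal sketch against this profile:

    # The apiserver certificate should carry 192.168.15.15 and www.google.com as extra SANs.
    out/minikube-linux-arm64 -p cert-options-116071 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    # The admin kubeconfig should point at the non-default apiserver port 8555.
    out/minikube-linux-arm64 ssh -p cert-options-116071 -- "sudo cat /etc/kubernetes/admin.conf"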

TestCertExpiration (232.18s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-444928 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-444928 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.524496968s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-444928 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-444928 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.036601387s)
helpers_test.go:175: Cleaning up "cert-expiration-444928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-444928
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-444928: (2.613569664s)
--- PASS: TestCertExpiration (232.18s)
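
Most of the 232s wall time is the 3-minute certificate window itself: the test starts with short-lived certs, lets them lapse, then restarts with a one-year expiry (8760h = 365 * 24h) to confirm rotation. A minimal sketch of the two starts:

    out/minikube-linux-arm64 start -p cert-expiration-444928 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
    # ...wait out the 3m expiration window, then restart to rotate the certs...
    out/minikube-linux-arm64 start -p cert-expiration-444928 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd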

TestForceSystemdFlag (42.84s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-407306 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-407306 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.367416651s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-407306 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-407306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-407306
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-407306: (2.175721592s)
--- PASS: TestForceSystemdFlag (42.84s)
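
The ssh step exists to confirm that --force-systemd switched containerd to the systemd cgroup driver. A minimal sketch of the check; grepping for SystemdCgroup assumes containerd's standard runc option key, which the log itself does not print:

    out/minikube-linux-arm64 -p force-systemd-flag-407306 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # Expected in the runc runtime options: SystemdCgroup = true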

TestForceSystemdEnv (37.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-849258 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-849258 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.074117091s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-849258 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-849258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-849258
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-849258: (2.89647414s)
--- PASS: TestForceSystemdEnv (37.40s)

TestDockerEnvContainerd (42.82s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-338767 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-338767 --driver=docker  --container-runtime=containerd: (27.319658028s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-338767"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-wEHXW3MyMxmi/agent.1053315" SSH_AGENT_PID="1053316" DOCKER_HOST=ssh://docker@127.0.0.1:41457 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-wEHXW3MyMxmi/agent.1053315" SSH_AGENT_PID="1053316" DOCKER_HOST=ssh://docker@127.0.0.1:41457 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-wEHXW3MyMxmi/agent.1053315" SSH_AGENT_PID="1053316" DOCKER_HOST=ssh://docker@127.0.0.1:41457 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.136236635s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-wEHXW3MyMxmi/agent.1053315" SSH_AGENT_PID="1053316" DOCKER_HOST=ssh://docker@127.0.0.1:41457 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-338767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-338767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-338767: (1.958422241s)
--- PASS: TestDockerEnvContainerd (42.82s)
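
The same docker-env flow works in an interactive shell; eval-ing the emitted exports is the usual pattern (the SSH_AUTH_SOCK/SSH_AGENT_PID values in the log are produced by --ssh-add):

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-338767)"
    docker version
    # DOCKER_BUILDKIT=0 forces the legacy builder path this test exercises.
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls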

TestErrorSpam/setup (29.28s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-175670 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-175670 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-175670 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-175670 --driver=docker  --container-runtime=containerd: (29.280898141s)
--- PASS: TestErrorSpam/setup (29.28s)

TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 pause
--- PASS: TestErrorSpam/pause (1.83s)

TestErrorSpam/unpause (1.82s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 stop: (1.28734599s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-175670 --log_dir /tmp/nospam-175670 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19690-1028234/.minikube/files/etc/test/nested/copy/1033616/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.24s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-006225 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-006225 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (52.234882874s)
--- PASS: TestFunctional/serial/StartWithProxy (52.24s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.51s)

=== RUN   TestFunctional/serial/SoftStart
I0923 13:35:39.431439 1033616 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-006225 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-006225 --alsologtostderr -v=8: (6.502972017s)
functional_test.go:663: soft start took 6.506658819s for "functional-006225" cluster.
I0923 13:35:45.934742 1033616 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.51s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-006225 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-006225 cache add registry.k8s.io/pause:3.1: (1.503422347s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-006225 cache add registry.k8s.io/pause:3.3: (1.414234507s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-006225 cache add registry.k8s.io/pause:latest: (1.241598107s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.16s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-006225 /tmp/TestFunctionalserialCacheCmdcacheadd_local2548631137/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 cache add minikube-local-cache-test:functional-006225
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 cache delete minikube-local-cache-test:functional-006225
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-006225
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-006225 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (281.231753ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-006225 cache reload: (1.121150358s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.02s)
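
The round trip above removes a cached image from the node's runtime, verifies it is gone, and confirms that cache reload pushes it back. A minimal sketch of the same sequence:

    out/minikube-linux-arm64 -p functional-006225 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-006225 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero: image gone
    out/minikube-linux-arm64 -p functional-006225 cache reload
    out/minikube-linux-arm64 -p functional-006225 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after reload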

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 kubectl -- --context functional-006225 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-006225 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (45.31s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-006225 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-006225 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.306897964s)
functional_test.go:761: restart took 45.307015599s for "functional-006225" cluster.
I0923 13:36:39.714311 1033616 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (45.31s)
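
The restart threads a component flag through to the apiserver; --extra-config takes the form component.key=value, and --wait=all blocks until all verified components report healthy. A minimal sketch of the invocation:

    out/minikube-linux-arm64 start -p functional-006225 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all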

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-006225 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.73s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-006225 logs: (1.727177382s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)

TestFunctional/serial/LogsFileCmd (1.74s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 logs --file /tmp/TestFunctionalserialLogsFileCmd1173112443/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-006225 logs --file /tmp/TestFunctionalserialLogsFileCmd1173112443/001/logs.txt: (1.74269591s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.74s)

TestFunctional/serial/InvalidService (6.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-006225 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-006225
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-006225: exit status 115 (657.931563ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31884 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-006225 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-006225 delete -f testdata/invalidsvc.yaml: (2.142654688s)
--- PASS: TestFunctional/serial/InvalidService (6.03s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-006225 config get cpus: exit status 14 (82.11942ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-006225 config get cpus: exit status 14 (67.073068ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (9.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-006225 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-006225 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1067753: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.53s)

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-006225 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-006225 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (162.374926ms)
-- stdout --
	* [functional-006225] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0923 13:37:21.276904 1067458 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:37:21.277191 1067458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:37:21.277222 1067458 out.go:358] Setting ErrFile to fd 2...
	I0923 13:37:21.277242 1067458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:37:21.277625 1067458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	I0923 13:37:21.278103 1067458 out.go:352] Setting JSON to false
	I0923 13:37:21.279390 1067458 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":155988,"bootTime":1726942654,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0923 13:37:21.279491 1067458 start.go:139] virtualization:  
	I0923 13:37:21.281337 1067458 out.go:177] * [functional-006225] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:37:21.282940 1067458 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:37:21.283078 1067458 notify.go:220] Checking for updates...
	I0923 13:37:21.285617 1067458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:37:21.286877 1067458 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 13:37:21.288199 1067458 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	I0923 13:37:21.289208 1067458 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:37:21.290184 1067458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:37:21.291745 1067458 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 13:37:21.292442 1067458 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:37:21.315567 1067458 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:37:21.315689 1067458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:37:21.378366 1067458 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:37:21.366873518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:37:21.378523 1067458 docker.go:318] overlay module found
	I0923 13:37:21.379852 1067458 out.go:177] * Using the docker driver based on existing profile
	I0923 13:37:21.381051 1067458 start.go:297] selected driver: docker
	I0923 13:37:21.381069 1067458 start.go:901] validating driver "docker" against &{Name:functional-006225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-006225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:37:21.381160 1067458 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:37:21.383021 1067458 out.go:201] 
	W0923 13:37:21.384032 1067458 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 13:37:21.385240 1067458 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-006225 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.38s)

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-006225 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-006225 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (216.232913ms)
-- stdout --
	* [functional-006225] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0923 13:37:21.075082 1067352 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:37:21.075357 1067352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:37:21.075371 1067352 out.go:358] Setting ErrFile to fd 2...
	I0923 13:37:21.075378 1067352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:37:21.076385 1067352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	I0923 13:37:21.076780 1067352 out.go:352] Setting JSON to false
	I0923 13:37:21.077828 1067352 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":155987,"bootTime":1726942654,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0923 13:37:21.077915 1067352 start.go:139] virtualization:  
	I0923 13:37:21.080250 1067352 out.go:177] * [functional-006225] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0923 13:37:21.081737 1067352 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:37:21.081918 1067352 notify.go:220] Checking for updates...
	I0923 13:37:21.086701 1067352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:37:21.088216 1067352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 13:37:21.089758 1067352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	I0923 13:37:21.091001 1067352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:37:21.093003 1067352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:37:21.094884 1067352 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 13:37:21.095450 1067352 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:37:21.124697 1067352 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:37:21.125120 1067352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:37:21.213863 1067352 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:37:21.198935811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:37:21.213972 1067352 docker.go:318] overlay module found
	I0923 13:37:21.215783 1067352 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 13:37:21.217151 1067352 start.go:297] selected driver: docker
	I0923 13:37:21.217170 1067352 start.go:901] validating driver "docker" against &{Name:functional-006225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-006225 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:37:21.217301 1067352 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:37:21.219276 1067352 out.go:201] 
	W0923 13:37:21.220884 1067352 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 13:37:21.222124 1067352 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
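
Note: the French stderr above is the DryRun failure localized; "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB". The test reuses the same failing command and only changes the locale. Reproducing it by hand might look like this (assuming minikube picks its bundled translations from the standard LANG/LC_ALL variables; the exact mechanism the test uses is not shown in the log):

    # Hypothetical manual run: force a French locale so the fr translation
    # bundled with the binary is used for the RSRC_INSUFFICIENT_REQ_MEMORY text.
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-006225 \
        --dry-run --memory 250MB --driver=docker --container-runtime=containerd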

TestFunctional/parallel/StatusCmd (1.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)

TestFunctional/parallel/ServiceCmdConnect (8.81s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-006225 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-006225 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-5wzsk" [cdc13b26-f4b7-40ad-98a7-d8ec6a61f1b2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-5wzsk" [cdc13b26-f4b7-40ad-98a7-d8ec6a61f1b2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00365523s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31426
functional_test.go:1675: http://192.168.49.2:31426: success! body:

Hostname: hello-node-connect-65d86f57f4-5wzsk

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31426
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.81s)
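
Note: ServiceCmdConnect condenses the NodePort workflow that the later ServiceCmd subtests reuse: create a deployment from the arm echoserver image, expose it on port 8080 as a NodePort service, and let `minikube service --url` resolve the node IP and allocated port. The same three steps from the log:

    kubectl --context functional-006225 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-006225 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-arm64 -p functional-006225 service hello-node-connect --url
    # -> http://192.168.49.2:31426 in this run; the NodePort is allocated per service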

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (26.62s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [88f4f167-b406-4103-8eae-51117cbfa240] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004343106s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-006225 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-006225 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-006225 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-006225 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dfad1b24-3103-45d1-9a22-5f5d12f5d186] Pending
helpers_test.go:344: "sp-pod" [dfad1b24-3103-45d1-9a22-5f5d12f5d186] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dfad1b24-3103-45d1-9a22-5f5d12f5d186] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003789785s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-006225 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-006225 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-006225 delete -f testdata/storage-provisioner/pod.yaml: (1.62500913s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-006225 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [373af421-6222-4302-b256-ea2b96967953] Pending
helpers_test.go:344: "sp-pod" [373af421-6222-4302-b256-ea2b96967953] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [373af421-6222-4302-b256-ea2b96967953] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003145354s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-006225 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.62s)
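
Note: the PVC test proves persistence across pods: it applies testdata/storage-provisioner/pvc.yaml, waits for sp-pod to bind the claim via the default storage class, touches /tmp/mount/foo, deletes the pod, and then lists the file from a fresh sp-pod. A sketch of the kind of claim involved (only the name myclaim is taken from the log; the size and access mode here are hypothetical and may differ from the real testdata file):

    kubectl --context functional-006225 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
    EOF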

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (2.07s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh -n functional-006225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 cp functional-006225:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2781982791/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh -n functional-006225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh -n functional-006225 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.07s)
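
Note: CpCmd exercises the three copy directions `minikube cp` supports, each verified with `ssh sudo cat`: host file into the node, node file back out to the host, and host file into a node directory that does not exist yet. In short:

    # host -> node
    out/minikube-linux-arm64 -p functional-006225 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host (this destination path is hypothetical)
    out/minikube-linux-arm64 -p functional-006225 cp functional-006225:/home/docker/cp-test.txt /tmp/cp-test.txt
    # host -> nonexistent node directory, created on the fly
    out/minikube-linux-arm64 -p functional-006225 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt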

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1033616/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "sudo cat /etc/test/nested/copy/1033616/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (2.19s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1033616.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "sudo cat /etc/ssl/certs/1033616.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1033616.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "sudo cat /usr/share/ca-certificates/1033616.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/10336162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "sudo cat /etc/ssl/certs/10336162.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/10336162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "sudo cat /usr/share/ca-certificates/10336162.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)
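
Note: CertSync checks that a user-supplied CA certificate (named after the test run's PID, 1033616) lands in both /etc/ssl/certs and /usr/share/ca-certificates inside the node, along with its OpenSSL subject-hash link (51391683.0). A hedged sketch of the mechanism being tested, assuming the documented behavior that minikube copies certificates from $MINIKUBE_HOME/certs into the node on start (my-ca.pem is hypothetical):

    cp my-ca.pem ~/.minikube/certs/
    out/minikube-linux-arm64 start -p functional-006225
    out/minikube-linux-arm64 -p functional-006225 ssh "sudo cat /etc/ssl/certs/my-ca.pem"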

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-006225 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-006225 ssh "sudo systemctl is-active docker": exit status 1 (258.767765ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-006225 ssh "sudo systemctl is-active crio": exit status 1 (271.968088ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
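
Note: the exit status 3 in both stderr blocks above is expected rather than an error: `systemctl is-active` exits 0 only when the unit is active and non-zero (3 for an inactive unit) otherwise, so on a containerd profile the docker and crio units must fail this probe. For example:

    out/minikube-linux-arm64 -p functional-006225 ssh "sudo systemctl is-active docker"
    # prints "inactive" and exits 3; crio behaves the same on this profile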

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-006225 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-006225 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-006225 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-006225 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1064963: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-006225 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-006225 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [63fe07f4-0aeb-4aa6-8290-3800ff270bf0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [63fe07f4-0aeb-4aa6-8290-3800ff270bf0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003224934s
I0923 13:36:59.646134 1033616 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-006225 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.232.18 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-006225 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
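
Note: taken together, the TunnelCmd/serial steps demonstrate the LoadBalancer workflow: with `minikube tunnel` running as a daemon, the nginx-svc service from testdata/testsvc.yaml receives an ingress IP (10.111.232.18 in this run) that is directly reachable from the host. The equivalent manual sequence, using the same commands as the log:

    out/minikube-linux-arm64 -p functional-006225 tunnel --alsologtostderr &   # leave running
    kubectl --context functional-006225 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.111.232.18/   # the IP is per-run; substitute the one printed above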

TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-006225 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-006225 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-vkh7g" [f913e3dc-5ebe-4d20-848f-712573b162e8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-vkh7g" [f913e3dc-5ebe-4d20-848f-712573b162e8] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003741579s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ServiceCmd/List (0.73s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.73s)

TestFunctional/parallel/ProfileCmd/profile_list (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "547.330281ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "54.620113ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.60s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "419.375441ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "75.003499ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 service list -o json
functional_test.go:1494: Took "610.16302ms" to run "out/minikube-linux-arm64 -p functional-006225 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/MountCmd/any-port (7.68s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-006225 /tmp/TestFunctionalparallelMountCmdany-port3641564736/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727098638227733314" to /tmp/TestFunctionalparallelMountCmdany-port3641564736/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727098638227733314" to /tmp/TestFunctionalparallelMountCmdany-port3641564736/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727098638227733314" to /tmp/TestFunctionalparallelMountCmdany-port3641564736/001/test-1727098638227733314
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-006225 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (372.86824ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 13:37:18.600879 1033616 retry.go:31] will retry after 712.657221ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 13:37 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 13:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 13:37 test-1727098638227733314
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh cat /mount-9p/test-1727098638227733314
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-006225 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9a690ed9-3ae8-46a2-b7dd-e928f2f19195] Pending
helpers_test.go:344: "busybox-mount" [9a690ed9-3ae8-46a2-b7dd-e928f2f19195] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9a690ed9-3ae8-46a2-b7dd-e928f2f19195] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9a690ed9-3ae8-46a2-b7dd-e928f2f19195] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004001096s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-006225 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-006225 /tmp/TestFunctionalparallelMountCmdany-port3641564736/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.68s)
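
Note: the any-port test shows the 9p mount flow end to end: `minikube mount host-dir:/mount-9p` serves a host directory into the node, `findmnt -T /mount-9p` is polled until the mount appears (hence the single retried non-zero exit above), and a busybox pod then reads the pre-created files and writes created-by-pod back through the mount. By hand (the host directory here is hypothetical):

    out/minikube-linux-arm64 mount -p functional-006225 /tmp/somedir:/mount-9p &
    out/minikube-linux-arm64 -p functional-006225 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-006225 ssh -- ls -la /mount-9p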

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31713
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31713
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)

TestFunctional/parallel/MountCmd/specific-port (1.42s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-006225 /tmp/TestFunctionalparallelMountCmdspecific-port2863805814/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-006225 /tmp/TestFunctionalparallelMountCmdspecific-port2863805814/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-006225 ssh "sudo umount -f /mount-9p": exit status 1 (314.817199ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-006225 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-006225 /tmp/TestFunctionalparallelMountCmdspecific-port2863805814/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.42s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.65s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-006225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1028966211/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-006225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1028966211/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-006225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1028966211/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-006225 ssh "findmnt -T" /mount1: exit status 1 (912.46631ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 13:37:28.239730 1033616 retry.go:31] will retry after 408.928106ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-006225 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-006225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1028966211/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-006225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1028966211/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-006225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1028966211/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.65s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-006225 version -o=json --components: (1.316801951s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-006225 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-006225
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-006225
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-006225 image ls --format short --alsologtostderr:
I0923 13:37:38.344474 1070618 out.go:345] Setting OutFile to fd 1 ...
I0923 13:37:38.344685 1070618 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:37:38.344698 1070618 out.go:358] Setting ErrFile to fd 2...
I0923 13:37:38.344704 1070618 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:37:38.344998 1070618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
I0923 13:37:38.345743 1070618 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 13:37:38.345905 1070618 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 13:37:38.346467 1070618 cli_runner.go:164] Run: docker container inspect functional-006225 --format={{.State.Status}}
I0923 13:37:38.369035 1070618 ssh_runner.go:195] Run: systemctl --version
I0923 13:37:38.369121 1070618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-006225
I0923 13:37:38.392979 1070618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41467 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/functional-006225/id_rsa Username:docker}
I0923 13:37:38.487648 1070618 ssh_runner.go:195] Run: sudo crictl images --output json
W0923 13:37:38.535996 1070618 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 5dbd8b43-fc80-4f1a-833e-d3c794b41817
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
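
Note: the ImageCommands/ImageList* subtests cover each output format of the same command; the stderr traces show that every variant is backed by a single `sudo crictl images --output json` call inside the node, reformatted client-side. For instance:

    out/minikube-linux-arm64 -p functional-006225 image ls --format short
    out/minikube-linux-arm64 -p functional-006225 image ls --format table
    out/minikube-linux-arm64 -p functional-006225 image ls --format json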

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-006225 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-006225  | sha256:fba522 | 991B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kicbase/echo-server               | functional-006225  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-006225 image ls --format table --alsologtostderr:
I0923 13:37:38.602176 1070684 out.go:345] Setting OutFile to fd 1 ...
I0923 13:37:38.602386 1070684 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:37:38.602414 1070684 out.go:358] Setting ErrFile to fd 2...
I0923 13:37:38.602436 1070684 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:37:38.602710 1070684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
I0923 13:37:38.603426 1070684 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 13:37:38.605480 1070684 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 13:37:38.606066 1070684 cli_runner.go:164] Run: docker container inspect functional-006225 --format={{.State.Status}}
I0923 13:37:38.630393 1070684 ssh_runner.go:195] Run: systemctl --version
I0923 13:37:38.630447 1070684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-006225
I0923 13:37:38.652094 1070684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41467 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/functional-006225/id_rsa Username:docker}
I0923 13:37:38.744168 1070684 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-006225 image ls --format json --alsologtostderr:
[{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicba
se/echo-server:functional-006225"],"size":"2173567"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d8
2a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:1611
cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"si
ze":"25687130"},{"id":"sha256:fba522790ade5106e53ce21e335bd3ef155d6e53d84c6147a11d335e0dcabdd7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-006225"],"size":"991"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-006225 image ls --format json --alsologtostderr:
I0923 13:37:38.612741 1070683 out.go:345] Setting OutFile to fd 1 ...
I0923 13:37:38.612930 1070683 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:37:38.612956 1070683 out.go:358] Setting ErrFile to fd 2...
I0923 13:37:38.612977 1070683 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:37:38.613253 1070683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
I0923 13:37:38.613956 1070683 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 13:37:38.614163 1070683 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 13:37:38.614700 1070683 cli_runner.go:164] Run: docker container inspect functional-006225 --format={{.State.Status}}
I0923 13:37:38.634984 1070683 ssh_runner.go:195] Run: systemctl --version
I0923 13:37:38.635050 1070683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-006225
I0923 13:37:38.654029 1070683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41467 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/functional-006225/id_rsa Username:docker}
I0923 13:37:38.756282 1070683 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
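Editor's note: the JSON listing above is a flat array of image records (id, repoDigests, repoTags, size). A minimal Go sketch for consuming it, not part of the test suite, with the struct shape inferred from the output itself:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// imageInfo mirrors one element of the `image ls --format json` array.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-006225",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		// "sha256:" plus the first 12 hex characters of the digest
		fmt.Printf("%s -> %v (%s bytes)\n", img.ID[:19], img.RepoTags, img.Size)
	}
}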

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-006225 image ls --format yaml --alsologtostderr:
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:fba522790ade5106e53ce21e335bd3ef155d6e53d84c6147a11d335e0dcabdd7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-006225
size: "991"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-006225
size: "2173567"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-006225 image ls --format yaml --alsologtostderr:
I0923 13:37:38.339499 1070619 out.go:345] Setting OutFile to fd 1 ...
I0923 13:37:38.339723 1070619 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:37:38.339756 1070619 out.go:358] Setting ErrFile to fd 2...
I0923 13:37:38.339781 1070619 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:37:38.340053 1070619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
I0923 13:37:38.340740 1070619 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 13:37:38.340910 1070619 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 13:37:38.341428 1070619 cli_runner.go:164] Run: docker container inspect functional-006225 --format={{.State.Status}}
I0923 13:37:38.362314 1070619 ssh_runner.go:195] Run: systemctl --version
I0923 13:37:38.362368 1070619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-006225
I0923 13:37:38.384885 1070619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41467 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/functional-006225/id_rsa Username:docker}
I0923 13:37:38.479770 1070619 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
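Editor's note: the YAML listing carries the same records as the JSON one. Assuming gopkg.in/yaml.v3 as the decoder (not a dependency of this suite), the same struct shape works with yaml tags:

package main

import (
	"fmt"
	"log"
	"os/exec"

	"gopkg.in/yaml.v3"
)

type imageInfo struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-006225",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []imageInfo
	if err := yaml.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("decoded %d images\n", len(images))
}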

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-006225 ssh pgrep buildkitd: exit status 1 (286.602671ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image build -t localhost/my-image:functional-006225 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-006225 image build -t localhost/my-image:functional-006225 testdata/build --alsologtostderr: (3.024830773s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-006225 image build -t localhost/my-image:functional-006225 testdata/build --alsologtostderr:
I0923 13:37:39.141691 1070807 out.go:345] Setting OutFile to fd 1 ...
I0923 13:37:39.142915 1070807 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:37:39.142933 1070807 out.go:358] Setting ErrFile to fd 2...
I0923 13:37:39.142940 1070807 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:37:39.143219 1070807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
I0923 13:37:39.144092 1070807 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 13:37:39.146273 1070807 config.go:182] Loaded profile config "functional-006225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 13:37:39.146838 1070807 cli_runner.go:164] Run: docker container inspect functional-006225 --format={{.State.Status}}
I0923 13:37:39.164804 1070807 ssh_runner.go:195] Run: systemctl --version
I0923 13:37:39.164862 1070807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-006225
I0923 13:37:39.182193 1070807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41467 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/functional-006225/id_rsa Username:docker}
I0923 13:37:39.279768 1070807 build_images.go:161] Building image from path: /tmp/build.2100397931.tar
I0923 13:37:39.279847 1070807 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 13:37:39.289019 1070807 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2100397931.tar
I0923 13:37:39.292394 1070807 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2100397931.tar: stat -c "%s %y" /var/lib/minikube/build/build.2100397931.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2100397931.tar': No such file or directory
I0923 13:37:39.292431 1070807 ssh_runner.go:362] scp /tmp/build.2100397931.tar --> /var/lib/minikube/build/build.2100397931.tar (3072 bytes)
I0923 13:37:39.318368 1070807 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2100397931
I0923 13:37:39.327129 1070807 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2100397931 -xf /var/lib/minikube/build/build.2100397931.tar
I0923 13:37:39.336577 1070807 containerd.go:394] Building image: /var/lib/minikube/build/build.2100397931
I0923 13:37:39.336650 1070807 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2100397931 --local dockerfile=/var/lib/minikube/build/build.2100397931 --output type=image,name=localhost/my-image:functional-006225
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:365c58c834c6d0477994aaf5a8107c7633a9d8bb8a47b1d18127d696547df450 0.0s done
#8 exporting config sha256:9acf0e50385d7a26380c2756d319e71792c8dfe7a3845a3904b34bc0745e1b73 0.0s done
#8 naming to localhost/my-image:functional-006225 done
#8 DONE 0.2s
I0923 13:37:42.078103 1070807 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2100397931 --local dockerfile=/var/lib/minikube/build/build.2100397931 --output type=image,name=localhost/my-image:functional-006225: (2.741417182s)
I0923 13:37:42.078218 1070807 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2100397931
I0923 13:37:42.089753 1070807 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2100397931.tar
I0923 13:37:42.105310 1070807 build_images.go:217] Built localhost/my-image:functional-006225 from /tmp/build.2100397931.tar
I0923 13:37:42.105343 1070807 build_images.go:133] succeeded building to: functional-006225
I0923 13:37:42.105349 1070807 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
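Editor's note: the build log shows the path minikube takes: tar the local context, copy it to /var/lib/minikube/build, unpack it, then drive BuildKit directly. A sketch reproducing just the final buildctl call, with the flags copied from the log; it assumes buildkitd is running and buildctl is on PATH:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Unpacked build context, as created by minikube in the log above.
	dir := "/var/lib/minikube/build/build.2100397931"
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+dir,
		"--local", "dockerfile="+dir,
		"--output", "type=image,name=localhost/my-image:functional-006225")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}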

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/09/23 13:37:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-006225
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image load --daemon kicbase/echo-server:functional-006225 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-006225 image load --daemon kicbase/echo-server:functional-006225 --alsologtostderr: (1.195300391s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image load --daemon kicbase/echo-server:functional-006225 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-006225 image load --daemon kicbase/echo-server:functional-006225 --alsologtostderr: (1.027915947s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-006225
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image load --daemon kicbase/echo-server:functional-006225 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-006225 image load --daemon kicbase/echo-server:functional-006225 --alsologtostderr: (1.077376111s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image save kicbase/echo-server:functional-006225 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image rm kicbase/echo-server:functional-006225 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-006225
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-006225 image save --daemon kicbase/echo-server:functional-006225 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-006225
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
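Editor's note: taken together, the image tests above exercise a save/remove/load round trip. Condensed into one sequence as a sketch (same commands as in the logs, wrapped in os/exec; the tar path here is an arbitrary writable location, not the workspace path the job used):

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts with its combined output on failure.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	mk := "out/minikube-linux-arm64"
	img := "kicbase/echo-server:functional-006225"
	tar := "/tmp/echo-server-save.tar" // any writable path

	run(mk, "-p", "functional-006225", "image", "save", img, tar)        // cluster -> file
	run(mk, "-p", "functional-006225", "image", "rm", img)               // remove from cluster
	run(mk, "-p", "functional-006225", "image", "load", tar)             // file -> cluster
	run("docker", "rmi", img)                                            // remove from host daemon
	run(mk, "-p", "functional-006225", "image", "save", "--daemon", img) // cluster -> daemon
	run("docker", "image", "inspect", img)                               // confirm it is back
}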

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-006225
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-006225
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-006225
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (124.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-555826 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0923 13:37:45.564229 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:37:48.125560 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:37:53.246878 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:38:03.488279 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:38:23.969705 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:39:04.931904 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-555826 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m3.383996924s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (124.26s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (31.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-555826 -- rollout status deployment/busybox: (28.455390102s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-2rsrp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-9mt79 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-c9p2c -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-2rsrp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-9mt79 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-c9p2c -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-2rsrp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-9mt79 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-c9p2c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.61s)
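Editor's note: DeployApp resolves three names from every busybox replica. The loop, reconstructed as a sketch (the pod names are per-run values taken from the `get pods` call above and will differ on another run):

package main

import (
	"log"
	"os/exec"
)

func main() {
	pods := []string{ // per-run names from the rollout above
		"busybox-7dff88458-2rsrp", "busybox-7dff88458-9mt79", "busybox-7dff88458-c9p2c",
	}
	names := []string{
		"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local",
	}
	for _, name := range names {
		for _, pod := range pods {
			out, err := exec.Command("kubectl", "--context", "ha-555826",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				log.Fatalf("%s in %s: %v\n%s", name, pod, err, out)
			}
		}
	}
}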

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-2rsrp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-2rsrp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-9mt79 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-9mt79 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-c9p2c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-555826 -- exec busybox-7dff88458-c9p2c -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)
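Editor's note: the host-ping test extracts the host address with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, i.e. line 5, third space-separated field, and then pings it. The same extraction in Go, with an illustrative nslookup transcript (the real output shape depends on the busybox image's resolver):

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: line 5, third field.
func hostIP(nslookup string) string {
	lines := strings.Split(nslookup, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5 -> index 4
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -d' ' -f3
}

func main() {
	// Illustrative transcript only; real output depends on the resolver.
	sample := "Server:\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress 1: 192.168.49.1\n"
	fmt.Println(hostIP(sample)) // 192.168.49.1
}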

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-555826 -v=7 --alsologtostderr
E0923 13:40:26.855821 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-555826 -v=7 --alsologtostderr: (23.193523184s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.18s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-555826 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.080867247s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-555826 status --output json -v=7 --alsologtostderr: (1.034604839s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp testdata/cp-test.txt ha-555826:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1132939905/001/cp-test_ha-555826.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826:/home/docker/cp-test.txt ha-555826-m02:/home/docker/cp-test_ha-555826_ha-555826-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m02 "sudo cat /home/docker/cp-test_ha-555826_ha-555826-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826:/home/docker/cp-test.txt ha-555826-m03:/home/docker/cp-test_ha-555826_ha-555826-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m03 "sudo cat /home/docker/cp-test_ha-555826_ha-555826-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826:/home/docker/cp-test.txt ha-555826-m04:/home/docker/cp-test_ha-555826_ha-555826-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m04 "sudo cat /home/docker/cp-test_ha-555826_ha-555826-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp testdata/cp-test.txt ha-555826-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1132939905/001/cp-test_ha-555826-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m02:/home/docker/cp-test.txt ha-555826:/home/docker/cp-test_ha-555826-m02_ha-555826.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826 "sudo cat /home/docker/cp-test_ha-555826-m02_ha-555826.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m02:/home/docker/cp-test.txt ha-555826-m03:/home/docker/cp-test_ha-555826-m02_ha-555826-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m03 "sudo cat /home/docker/cp-test_ha-555826-m02_ha-555826-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m02:/home/docker/cp-test.txt ha-555826-m04:/home/docker/cp-test_ha-555826-m02_ha-555826-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m04 "sudo cat /home/docker/cp-test_ha-555826-m02_ha-555826-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp testdata/cp-test.txt ha-555826-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1132939905/001/cp-test_ha-555826-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m03:/home/docker/cp-test.txt ha-555826:/home/docker/cp-test_ha-555826-m03_ha-555826.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826 "sudo cat /home/docker/cp-test_ha-555826-m03_ha-555826.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m03:/home/docker/cp-test.txt ha-555826-m02:/home/docker/cp-test_ha-555826-m03_ha-555826-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m02 "sudo cat /home/docker/cp-test_ha-555826-m03_ha-555826-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m03:/home/docker/cp-test.txt ha-555826-m04:/home/docker/cp-test_ha-555826-m03_ha-555826-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m04 "sudo cat /home/docker/cp-test_ha-555826-m03_ha-555826-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp testdata/cp-test.txt ha-555826-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1132939905/001/cp-test_ha-555826-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m04:/home/docker/cp-test.txt ha-555826:/home/docker/cp-test_ha-555826-m04_ha-555826.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826 "sudo cat /home/docker/cp-test_ha-555826-m04_ha-555826.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m04:/home/docker/cp-test.txt ha-555826-m02:/home/docker/cp-test_ha-555826-m04_ha-555826-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m02 "sudo cat /home/docker/cp-test_ha-555826-m04_ha-555826-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 cp ha-555826-m04:/home/docker/cp-test.txt ha-555826-m03:/home/docker/cp-test_ha-555826-m04_ha-555826-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 ssh -n ha-555826-m03 "sudo cat /home/docker/cp-test_ha-555826-m04_ha-555826-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.24s)
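Editor's note: CopyFile walks a full copy matrix: host to each node, each node back to the host, and every ordered node pair, verifying each hop with `ssh ... sudo cat`. A sketch that enumerates the same matrix (each printed line corresponds to a `minikube -p ha-555826 cp ...` invocation in the log):

package main

import "fmt"

func main() {
	nodes := []string{"ha-555826", "ha-555826-m02", "ha-555826-m03", "ha-555826-m04"}
	tmp := "/tmp/TestMultiControlPlaneserialCopyFile1132939905/001"
	for _, src := range nodes {
		// host -> node, then node -> host
		fmt.Printf("cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
		fmt.Printf("cp %s:/home/docker/cp-test.txt %s/cp-test_%s.txt\n", src, tmp, src)
		// node -> every other node
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			fmt.Printf("cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				src, dst, src, dst)
		}
	}
}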

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-555826 node stop m02 -v=7 --alsologtostderr: (12.122644614s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-555826 status -v=7 --alsologtostderr: exit status 7 (783.1327ms)

                                                
                                                
-- stdout --
	ha-555826
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-555826-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-555826-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-555826-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:41:19.535854 1087047 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:41:19.535970 1087047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:41:19.535979 1087047 out.go:358] Setting ErrFile to fd 2...
	I0923 13:41:19.535985 1087047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:41:19.536238 1087047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	I0923 13:41:19.536420 1087047 out.go:352] Setting JSON to false
	I0923 13:41:19.536454 1087047 mustload.go:65] Loading cluster: ha-555826
	I0923 13:41:19.536524 1087047 notify.go:220] Checking for updates...
	I0923 13:41:19.536998 1087047 config.go:182] Loaded profile config "ha-555826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 13:41:19.537020 1087047 status.go:174] checking status of ha-555826 ...
	I0923 13:41:19.537632 1087047 cli_runner.go:164] Run: docker container inspect ha-555826 --format={{.State.Status}}
	I0923 13:41:19.559777 1087047 status.go:364] ha-555826 host status = "Running" (err=<nil>)
	I0923 13:41:19.559805 1087047 host.go:66] Checking if "ha-555826" exists ...
	I0923 13:41:19.560152 1087047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-555826
	I0923 13:41:19.593070 1087047 host.go:66] Checking if "ha-555826" exists ...
	I0923 13:41:19.593385 1087047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:41:19.593422 1087047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-555826
	I0923 13:41:19.633708 1087047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41472 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/ha-555826/id_rsa Username:docker}
	I0923 13:41:19.732740 1087047 ssh_runner.go:195] Run: systemctl --version
	I0923 13:41:19.737518 1087047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:41:19.750743 1087047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:41:19.804842 1087047 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-23 13:41:19.794614656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:41:19.805467 1087047 kubeconfig.go:125] found "ha-555826" server: "https://192.168.49.254:8443"
	I0923 13:41:19.805504 1087047 api_server.go:166] Checking apiserver status ...
	I0923 13:41:19.805551 1087047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:41:19.817684 1087047 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup
	I0923 13:41:19.827496 1087047 api_server.go:182] apiserver freezer: "11:freezer:/docker/e00006af8d5f6f02127c885526933a43daf9e395e1461eb1e7e9f1c594115501/kubepods/burstable/pod937952c352d28f50e948e2dc54dee085/b0530cc0caec65c1c25b3b3fc02db02724cd2f0e6e25336f74fc4e5c47d865b2"
	I0923 13:41:19.827585 1087047 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e00006af8d5f6f02127c885526933a43daf9e395e1461eb1e7e9f1c594115501/kubepods/burstable/pod937952c352d28f50e948e2dc54dee085/b0530cc0caec65c1c25b3b3fc02db02724cd2f0e6e25336f74fc4e5c47d865b2/freezer.state
	I0923 13:41:19.837033 1087047 api_server.go:204] freezer state: "THAWED"
	I0923 13:41:19.837058 1087047 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 13:41:19.845310 1087047 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 13:41:19.845337 1087047 status.go:456] ha-555826 apiserver status = Running (err=<nil>)
	I0923 13:41:19.845349 1087047 status.go:176] ha-555826 status: &{Name:ha-555826 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:41:19.845365 1087047 status.go:174] checking status of ha-555826-m02 ...
	I0923 13:41:19.845681 1087047 cli_runner.go:164] Run: docker container inspect ha-555826-m02 --format={{.State.Status}}
	I0923 13:41:19.862745 1087047 status.go:364] ha-555826-m02 host status = "Stopped" (err=<nil>)
	I0923 13:41:19.862818 1087047 status.go:377] host is not running, skipping remaining checks
	I0923 13:41:19.862840 1087047 status.go:176] ha-555826-m02 status: &{Name:ha-555826-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:41:19.862868 1087047 status.go:174] checking status of ha-555826-m03 ...
	I0923 13:41:19.863191 1087047 cli_runner.go:164] Run: docker container inspect ha-555826-m03 --format={{.State.Status}}
	I0923 13:41:19.894961 1087047 status.go:364] ha-555826-m03 host status = "Running" (err=<nil>)
	I0923 13:41:19.894988 1087047 host.go:66] Checking if "ha-555826-m03" exists ...
	I0923 13:41:19.895311 1087047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-555826-m03
	I0923 13:41:19.913326 1087047 host.go:66] Checking if "ha-555826-m03" exists ...
	I0923 13:41:19.913653 1087047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:41:19.913704 1087047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-555826-m03
	I0923 13:41:19.932352 1087047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41482 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/ha-555826-m03/id_rsa Username:docker}
	I0923 13:41:20.025132 1087047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:41:20.039019 1087047 kubeconfig.go:125] found "ha-555826" server: "https://192.168.49.254:8443"
	I0923 13:41:20.039052 1087047 api_server.go:166] Checking apiserver status ...
	I0923 13:41:20.039099 1087047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:41:20.050794 1087047 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup
	I0923 13:41:20.070609 1087047 api_server.go:182] apiserver freezer: "11:freezer:/docker/fbdde8d365d4a5f0910e71ae2e622f5f06cff3d367651aaa3b6e43f073d6109a/kubepods/burstable/podcc8fb2b8c159bfc1969f809bc8d7773e/0cf36d48096820d0e2d42d9442b0b0ce9972ef9aa0e239ce1b410cf4911996b0"
	I0923 13:41:20.070719 1087047 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fbdde8d365d4a5f0910e71ae2e622f5f06cff3d367651aaa3b6e43f073d6109a/kubepods/burstable/podcc8fb2b8c159bfc1969f809bc8d7773e/0cf36d48096820d0e2d42d9442b0b0ce9972ef9aa0e239ce1b410cf4911996b0/freezer.state
	I0923 13:41:20.080500 1087047 api_server.go:204] freezer state: "THAWED"
	I0923 13:41:20.080535 1087047 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 13:41:20.088962 1087047 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 13:41:20.088995 1087047 status.go:456] ha-555826-m03 apiserver status = Running (err=<nil>)
	I0923 13:41:20.089006 1087047 status.go:176] ha-555826-m03 status: &{Name:ha-555826-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:41:20.089025 1087047 status.go:174] checking status of ha-555826-m04 ...
	I0923 13:41:20.089377 1087047 cli_runner.go:164] Run: docker container inspect ha-555826-m04 --format={{.State.Status}}
	I0923 13:41:20.108173 1087047 status.go:364] ha-555826-m04 host status = "Running" (err=<nil>)
	I0923 13:41:20.108201 1087047 host.go:66] Checking if "ha-555826-m04" exists ...
	I0923 13:41:20.108538 1087047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-555826-m04
	I0923 13:41:20.131146 1087047 host.go:66] Checking if "ha-555826-m04" exists ...
	I0923 13:41:20.131634 1087047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:41:20.131722 1087047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-555826-m04
	I0923 13:41:20.153844 1087047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41487 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/ha-555826-m04/id_rsa Username:docker}
	I0923 13:41:20.248392 1087047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:41:20.260260 1087047 status.go:176] ha-555826-m04 status: &{Name:ha-555826-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.91s)
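Editor's note: the status probe visible in the stderr above locates kube-apiserver with pgrep, resolves its freezer cgroup from /proc/<pid>/cgroup, and requires freezer.state to be THAWED before polling /healthz. A sketch of the cgroup-v1 lookup; the pid 1532 is the value pgrep returned on this host and will differ elsewhere:

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerPath extracts the freezer cgroup path from /proc/<pid>/cgroup content.
func freezerPath(procCgroup string) string {
	for _, line := range strings.Split(procCgroup, "\n") {
		parts := strings.SplitN(line, ":", 3) // hierarchy:controllers:path
		if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
			return "/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state"
		}
	}
	return ""
}

func main() {
	data, err := os.ReadFile("/proc/1532/cgroup") // pid from `pgrep kube-apiserver` on this host
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	state, err := os.ReadFile(freezerPath(string(data)))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("freezer state:", strings.TrimSpace(string(state))) // expect THAWED
}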

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.34s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-555826 node start m02 -v=7 --alsologtostderr: (17.23854708s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.34s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.056785701s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.06s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (122.36s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-555826 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-555826 -v=7 --alsologtostderr
E0923 13:41:50.070426 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:41:50.076875 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:41:50.088359 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:41:50.109778 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:41:50.151118 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:41:50.232515 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:41:50.393883 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:41:50.715214 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:41:51.356983 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:41:52.638364 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:41:55.200522 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:42:00.322895 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-555826 -v=7 --alsologtostderr: (26.399029269s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-555826 --wait=true -v=7 --alsologtostderr
E0923 13:42:10.564702 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:42:31.046559 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:42:42.993007 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:43:10.697558 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:43:12.008861 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-555826 --wait=true -v=7 --alsologtostderr: (1m35.795379698s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-555826
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (122.36s)
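The repeated cert_rotation "Unhandled Error" lines during the stop/start cycle appear to come from a client-certificate reloader that still watches certs of profiles deleted earlier in this run (functional-006225, addons-095355); they do not affect the result of this test. One quick way to see which profile directories actually remain (path taken from the log):

  $ ls /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/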

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.71s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-555826 node delete m03 -v=7 --alsologtostderr: (9.769531891s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.71s)
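The last assertion above walks every node's conditions with a go-template and prints the status of the one whose type is "Ready", so a healthy three-node cluster yields three "True" lines. An equivalent jsonpath form, shown only as an alternative sketch (not what the test itself runs):

  $ kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'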

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.13s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-555826 stop -v=7 --alsologtostderr: (36.019969029s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-555826 status -v=7 --alsologtostderr: exit status 7 (106.891228ms)

                                                
                                                
-- stdout --
	ha-555826
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-555826-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-555826-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:44:30.354352 1100876 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:44:30.354506 1100876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:44:30.354518 1100876 out.go:358] Setting ErrFile to fd 2...
	I0923 13:44:30.354524 1100876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:44:30.354755 1100876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	I0923 13:44:30.354936 1100876 out.go:352] Setting JSON to false
	I0923 13:44:30.354980 1100876 mustload.go:65] Loading cluster: ha-555826
	I0923 13:44:30.355066 1100876 notify.go:220] Checking for updates...
	I0923 13:44:30.356059 1100876 config.go:182] Loaded profile config "ha-555826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 13:44:30.356088 1100876 status.go:174] checking status of ha-555826 ...
	I0923 13:44:30.356630 1100876 cli_runner.go:164] Run: docker container inspect ha-555826 --format={{.State.Status}}
	I0923 13:44:30.373942 1100876 status.go:364] ha-555826 host status = "Stopped" (err=<nil>)
	I0923 13:44:30.373965 1100876 status.go:377] host is not running, skipping remaining checks
	I0923 13:44:30.373972 1100876 status.go:176] ha-555826 status: &{Name:ha-555826 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:44:30.374004 1100876 status.go:174] checking status of ha-555826-m02 ...
	I0923 13:44:30.374314 1100876 cli_runner.go:164] Run: docker container inspect ha-555826-m02 --format={{.State.Status}}
	I0923 13:44:30.392963 1100876 status.go:364] ha-555826-m02 host status = "Stopped" (err=<nil>)
	I0923 13:44:30.392984 1100876 status.go:377] host is not running, skipping remaining checks
	I0923 13:44:30.392991 1100876 status.go:176] ha-555826-m02 status: &{Name:ha-555826-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:44:30.393009 1100876 status.go:174] checking status of ha-555826-m04 ...
	I0923 13:44:30.393307 1100876 cli_runner.go:164] Run: docker container inspect ha-555826-m04 --format={{.State.Status}}
	I0923 13:44:30.414650 1100876 status.go:364] ha-555826-m04 host status = "Stopped" (err=<nil>)
	I0923 13:44:30.414670 1100876 status.go:377] host is not running, skipping remaining checks
	I0923 13:44:30.414679 1100876 status.go:176] ha-555826-m04 status: &{Name:ha-555826-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.13s)
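Note that with every host stopped, `status` reports the state and exits non-zero (exit status 7 above) instead of failing outright, so scripts can branch on the code. A sketch:

  $ minikube -p ha-555826 status
  $ echo $?    # 7 in this run, with all hosts stopped; 0 once everything is Running again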

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (76.65s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-555826 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0923 13:44:33.930385 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-555826 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.595587241s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (76.65s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (44.76s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-555826 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-555826 --control-plane -v=7 --alsologtostderr: (43.74760769s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-555826 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-555826 status -v=7 --alsologtostderr: (1.015044618s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.76s)
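This step grows the HA cluster back to four nodes by joining a new control-plane member. The same operation by hand (profile name from this run):

  $ minikube node add -p ha-555826 --control-plane   # join an extra control-plane node
  $ minikube -p ha-555826 status                     # each node should report Running/Configured
  $ kubectl get nodes                                # the new node carries a control-plane role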

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

                                                
                                    
TestJSONOutput/start/Command (84.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-814040 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0923 13:46:50.070027 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:47:17.771814 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:47:42.992799 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-814040 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m24.789145938s)
--- PASS: TestJSONOutput/start/Command (84.79s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-814040 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-814040 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-814040 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-814040 --output=json --user=testUser: (5.761232713s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-050531 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-050531 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (81.657669ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6a007668-3aab-451e-8ab9-b4212ddab733","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-050531] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"184a7e8b-d8da-4d82-ab16-92a2798ae0e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"4dfdbf99-2811-4f75-a109-92e9d06db204","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"41d4d456-b4e1-444a-bdda-48a3ea27165c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig"}}
	{"specversion":"1.0","id":"65017922-38e5-40a2-8d83-03539ef38f53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube"}}
	{"specversion":"1.0","id":"261b34cb-2c8f-423c-9f61-122aee24712d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a6948c60-6afc-4d69-89f5-9a5dc6ee20c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1147986e-7f71-4e7b-bb77-9876908d46b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-050531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-050531
--- PASS: TestErrorJSONOutput (0.22s)
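Every line of the stdout block above is a CloudEvents-style JSON object (types io.k8s.sigs.minikube.step, .info, .error). A small sketch for pulling the failure out of such a stream, assuming jq is available; the profile name is hypothetical:

  $ minikube start -p demo --output=json --driver=fail 2>&1 \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
  DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64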

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.68s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-550567 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-550567 --network=: (38.578504889s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-550567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-550567
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-550567: (2.073992444s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.68s)
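Here `--network=` is passed empty, so minikube provisions a dedicated bridge network for the cluster, which the docker network ls check then confirms. Passing a name instead attaches the node container to that network, creating it if absent. A sketch with hypothetical names:

  $ minikube start -p netdemo --network=demo-net
  $ docker network ls --format {{.Name}}   # demo-net should be listed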

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.62s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-153907 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-153907 --network=bridge: (34.580577339s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-153907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-153907
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-153907: (2.016436531s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.62s)

                                                
                                    
TestKicExistingNetwork (32.75s)

=== RUN   TestKicExistingNetwork
I0923 13:49:35.639451 1033616 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 13:49:35.654871 1033616 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 13:49:35.655461 1033616 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0923 13:49:35.656294 1033616 cli_runner.go:164] Run: docker network inspect existing-network
W0923 13:49:35.672334 1033616 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0923 13:49:35.672362 1033616 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0923 13:49:35.672378 1033616 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0923 13:49:35.672504 1033616 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 13:49:35.688956 1033616 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-40564a1c2688 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:99:53:d8:16} reservation:<nil>}
I0923 13:49:35.693797 1033616 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0923 13:49:35.694254 1033616 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195d8e0}
I0923 13:49:35.694282 1033616 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0923 13:49:35.694341 1033616 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0923 13:49:35.762915 1033616 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-938140 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-938140 --network=existing-network: (30.627445862s)
helpers_test.go:175: Cleaning up "existing-network-938140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-938140
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-938140: (1.966368895s)
I0923 13:50:08.374178 1033616 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.75s)
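In this variant the network exists before minikube starts, so minikube reuses it rather than allocating its own subnet (the log shows it skipping 192.168.49.0/24 and 192.168.58.0/24 before settling on 192.168.67.0/24). The pre-creation step, copied from the log:

  $ docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
      existing-network
  $ minikube start -p existing-network-938140 --network=existing-network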

                                                
                                    
TestKicCustomSubnet (33.34s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-924232 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-924232 --subnet=192.168.60.0/24: (31.186562393s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-924232 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-924232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-924232
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-924232: (2.129348386s)
--- PASS: TestKicCustomSubnet (33.34s)
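`--subnet=` pins the cluster network to a caller-chosen CIDR, and the assertion reads it back through the network's IPAM config. The verification command from the log, usable against any docker network:

  $ docker network inspect custom-subnet-924232 --format "{{(index .IPAM.Config 0).Subnet}}"
  192.168.60.0/24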

                                                
                                    
TestKicStaticIP (34.53s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-709710 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-709710 --static-ip=192.168.200.200: (32.303608693s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-709710 ip
helpers_test.go:175: Cleaning up "static-ip-709710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-709710
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-709710: (2.074113107s)
--- PASS: TestKicStaticIP (34.53s)
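`--static-ip=` gives the node container a fixed address and `minikube ip` reads it back. By hand, with the values from this run:

  $ minikube start -p static-ip-709710 --static-ip=192.168.200.200
  $ minikube -p static-ip-709710 ip
  192.168.200.200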

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (68.45s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-036175 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-036175 --driver=docker  --container-runtime=containerd: (32.136282617s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-038890 --driver=docker  --container-runtime=containerd
E0923 13:51:50.069889 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-038890 --driver=docker  --container-runtime=containerd: (30.684700776s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-036175
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-038890
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-038890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-038890
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-038890: (2.036162391s)
helpers_test.go:175: Cleaning up "first-036175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-036175
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-036175: (2.261658572s)
--- PASS: TestMinikubeProfile (68.45s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-216634 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-216634 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.402047742s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.40s)
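The start command above wires a 9p mount from the host into the guest at boot with explicit ownership, message size, and port, while `--no-kubernetes` keeps the cluster components out of the way. The same invocation in a readable layout (values from the log; /minikube-host is the in-guest mount point the Verify steps below list):

  $ minikube start -p mount-start-1-216634 --memory=2048 --no-kubernetes \
      --mount --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --driver=docker --container-runtime=containerd
  $ minikube -p mount-start-1-216634 ssh -- ls /minikube-host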

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-216634 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.99s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-218439 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-218439 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.988581225s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.99s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-218439 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-216634 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-216634 --alsologtostderr -v=5: (1.645926008s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-218439 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-218439
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-218439: (1.204030371s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.36s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-218439
E0923 13:52:42.993427 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-218439: (6.358463486s)
--- PASS: TestMountStart/serial/RestartStopped (7.36s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-218439 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (93.2s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812356 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0923 13:54:06.059023 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812356 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m32.703694449s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (93.20s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (17.62s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-812356 -- rollout status deployment/busybox: (15.825294327s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- exec busybox-7dff88458-2sznn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- exec busybox-7dff88458-hr5tr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- exec busybox-7dff88458-2sznn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- exec busybox-7dff88458-hr5tr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- exec busybox-7dff88458-2sznn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- exec busybox-7dff88458-hr5tr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.62s)
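After the rollout, each busybox pod is asked to resolve three names: an external host, the in-cluster short name, and the fully qualified service name. The same probes with plain kubectl (a hedged equivalent of the minikube-wrapped kubectl the test uses; pod name from this run):

  $ kubectl --context multinode-812356 exec busybox-7dff88458-2sznn -- nslookup kubernetes.io
  $ kubectl --context multinode-812356 exec busybox-7dff88458-2sznn -- nslookup kubernetes.default
  $ kubectl --context multinode-812356 exec busybox-7dff88458-2sznn -- nslookup kubernetes.default.svc.cluster.local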

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- exec busybox-7dff88458-2sznn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- exec busybox-7dff88458-2sznn -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- exec busybox-7dff88458-hr5tr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812356 -- exec busybox-7dff88458-hr5tr -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
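The host probe first scrapes the address the guest resolves for host.minikube.internal, then pings it. The scrape leans on busybox nslookup's fixed output shape: the answer sits on line 5, third space-separated field. Run inside a pod (plain-kubectl form; the 192.168.58.1 gateway is the value this run resolved):

  $ kubectl --context multinode-812356 exec busybox-7dff88458-2sznn -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  192.168.58.1
  $ kubectl --context multinode-812356 exec busybox-7dff88458-2sznn -- sh -c "ping -c 1 192.168.58.1"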

                                                
                                    
TestMultiNode/serial/AddNode (20.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-812356 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-812356 -v 3 --alsologtostderr: (19.316577868s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-812356 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.97s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp testdata/cp-test.txt multinode-812356:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp multinode-812356:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1428504831/001/cp-test_multinode-812356.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp multinode-812356:/home/docker/cp-test.txt multinode-812356-m02:/home/docker/cp-test_multinode-812356_multinode-812356-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m02 "sudo cat /home/docker/cp-test_multinode-812356_multinode-812356-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp multinode-812356:/home/docker/cp-test.txt multinode-812356-m03:/home/docker/cp-test_multinode-812356_multinode-812356-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m03 "sudo cat /home/docker/cp-test_multinode-812356_multinode-812356-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp testdata/cp-test.txt multinode-812356-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp multinode-812356-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1428504831/001/cp-test_multinode-812356-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp multinode-812356-m02:/home/docker/cp-test.txt multinode-812356:/home/docker/cp-test_multinode-812356-m02_multinode-812356.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356 "sudo cat /home/docker/cp-test_multinode-812356-m02_multinode-812356.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp multinode-812356-m02:/home/docker/cp-test.txt multinode-812356-m03:/home/docker/cp-test_multinode-812356-m02_multinode-812356-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m03 "sudo cat /home/docker/cp-test_multinode-812356-m02_multinode-812356-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp testdata/cp-test.txt multinode-812356-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp multinode-812356-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1428504831/001/cp-test_multinode-812356-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp multinode-812356-m03:/home/docker/cp-test.txt multinode-812356:/home/docker/cp-test_multinode-812356-m03_multinode-812356.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356 "sudo cat /home/docker/cp-test_multinode-812356-m03_multinode-812356.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 cp multinode-812356-m03:/home/docker/cp-test.txt multinode-812356-m02:/home/docker/cp-test_multinode-812356-m03_multinode-812356-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 ssh -n multinode-812356-m02 "sudo cat /home/docker/cp-test_multinode-812356-m03_multinode-812356-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.97s)
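`minikube cp` above is exercised in all three directions, and `ssh -n` reads each file back on the target node. The three shapes, with a simplified host destination path (the test itself uses a temp dir):

  $ minikube -p multinode-812356 cp testdata/cp-test.txt multinode-812356:/home/docker/cp-test.txt
  $ minikube -p multinode-812356 cp multinode-812356:/home/docker/cp-test.txt /tmp/cp-test.txt
  $ minikube -p multinode-812356 cp multinode-812356:/home/docker/cp-test.txt multinode-812356-m02:/home/docker/cp-test.txt
  $ minikube -p multinode-812356 ssh -n multinode-812356-m02 "sudo cat /home/docker/cp-test.txt"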

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-812356 node stop m03: (1.21425315s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812356 status: exit status 7 (538.526303ms)

                                                
                                                
-- stdout --
	multinode-812356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-812356-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-812356-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812356 status --alsologtostderr: exit status 7 (518.606848ms)

                                                
                                                
-- stdout --
	multinode-812356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-812356-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-812356-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:55:14.591700 1154494 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:55:14.591869 1154494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:55:14.591878 1154494 out.go:358] Setting ErrFile to fd 2...
	I0923 13:55:14.591884 1154494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:55:14.592134 1154494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	I0923 13:55:14.592408 1154494 out.go:352] Setting JSON to false
	I0923 13:55:14.592476 1154494 mustload.go:65] Loading cluster: multinode-812356
	I0923 13:55:14.592548 1154494 notify.go:220] Checking for updates...
	I0923 13:55:14.593831 1154494 config.go:182] Loaded profile config "multinode-812356": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 13:55:14.593868 1154494 status.go:174] checking status of multinode-812356 ...
	I0923 13:55:14.594651 1154494 cli_runner.go:164] Run: docker container inspect multinode-812356 --format={{.State.Status}}
	I0923 13:55:14.612335 1154494 status.go:364] multinode-812356 host status = "Running" (err=<nil>)
	I0923 13:55:14.612359 1154494 host.go:66] Checking if "multinode-812356" exists ...
	I0923 13:55:14.612689 1154494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-812356
	I0923 13:55:14.640220 1154494 host.go:66] Checking if "multinode-812356" exists ...
	I0923 13:55:14.640561 1154494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:55:14.640609 1154494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-812356
	I0923 13:55:14.660116 1154494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41592 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/multinode-812356/id_rsa Username:docker}
	I0923 13:55:14.757123 1154494 ssh_runner.go:195] Run: systemctl --version
	I0923 13:55:14.761513 1154494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:55:14.773579 1154494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:55:14.833313 1154494 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-23 13:55:14.822575359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:55:14.833926 1154494 kubeconfig.go:125] found "multinode-812356" server: "https://192.168.58.2:8443"
	I0923 13:55:14.833978 1154494 api_server.go:166] Checking apiserver status ...
	I0923 13:55:14.834027 1154494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:55:14.845799 1154494 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	I0923 13:55:14.855833 1154494 api_server.go:182] apiserver freezer: "11:freezer:/docker/efbf0bf857f67a62027c71188e95d7d84993c53119d5ab8603c935f5f3af4144/kubepods/burstable/podf68b77261be6a3699e6d4f37eec1cc96/55477a711d767a90c6bcaf8885e93c53aa329dd08472ba81e49b761d531f7186"
	I0923 13:55:14.855911 1154494 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/efbf0bf857f67a62027c71188e95d7d84993c53119d5ab8603c935f5f3af4144/kubepods/burstable/podf68b77261be6a3699e6d4f37eec1cc96/55477a711d767a90c6bcaf8885e93c53aa329dd08472ba81e49b761d531f7186/freezer.state
	I0923 13:55:14.864784 1154494 api_server.go:204] freezer state: "THAWED"
	I0923 13:55:14.864816 1154494 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0923 13:55:14.873347 1154494 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0923 13:55:14.873379 1154494 status.go:456] multinode-812356 apiserver status = Running (err=<nil>)
	I0923 13:55:14.873390 1154494 status.go:176] multinode-812356 status: &{Name:multinode-812356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:55:14.873408 1154494 status.go:174] checking status of multinode-812356-m02 ...
	I0923 13:55:14.873733 1154494 cli_runner.go:164] Run: docker container inspect multinode-812356-m02 --format={{.State.Status}}
	I0923 13:55:14.891754 1154494 status.go:364] multinode-812356-m02 host status = "Running" (err=<nil>)
	I0923 13:55:14.891784 1154494 host.go:66] Checking if "multinode-812356-m02" exists ...
	I0923 13:55:14.892324 1154494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-812356-m02
	I0923 13:55:14.909229 1154494 host.go:66] Checking if "multinode-812356-m02" exists ...
	I0923 13:55:14.909578 1154494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:55:14.909623 1154494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-812356-m02
	I0923 13:55:14.927142 1154494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41597 SSHKeyPath:/home/jenkins/minikube-integration/19690-1028234/.minikube/machines/multinode-812356-m02/id_rsa Username:docker}
	I0923 13:55:15.022620 1154494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:55:15.036826 1154494 status.go:176] multinode-812356-m02 status: &{Name:multinode-812356-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:55:15.036865 1154494 status.go:174] checking status of multinode-812356-m03 ...
	I0923 13:55:15.037221 1154494 cli_runner.go:164] Run: docker container inspect multinode-812356-m03 --format={{.State.Status}}
	I0923 13:55:15.056921 1154494 status.go:364] multinode-812356-m03 host status = "Stopped" (err=<nil>)
	I0923 13:55:15.056944 1154494 status.go:377] host is not running, skipping remaining checks
	I0923 13:55:15.056952 1154494 status.go:176] multinode-812356-m03 status: &{Name:multinode-812356-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
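
Note the exit-code convention this test exercises: "minikube status" exits 7 when any node's host is stopped, while still printing the per-node breakdown on stdout. A minimal sketch of a caller that distinguishes "some node down" (exit 7) from an outright failure, assuming only the behavior visible in this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-812356", "status")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Exit 7: the profile exists but at least one host/kubelet is stopped;
		// stdout still carries the per-node breakdown shown in the log above.
		fmt.Printf("cluster degraded:\n%s", out)
	default:
		fmt.Printf("status failed outright: %v\n", err)
	}
}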

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-812356 node start m03 -v=7 --alsologtostderr: (9.184007304s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.97s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (105s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-812356
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-812356
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-812356: (25.070915524s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812356 --wait=true -v=8 --alsologtostderr
E0923 13:56:50.070144 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812356 --wait=true -v=8 --alsologtostderr: (1m19.788877093s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-812356
--- PASS: TestMultiNode/serial/RestartKeepsNodes (105.00s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-812356 node delete m03: (4.846718285s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.54s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-812356 stop: (23.87569307s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812356 status: exit status 7 (97.341472ms)

                                                
                                                
-- stdout --
	multinode-812356
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-812356-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812356 status --alsologtostderr: exit status 7 (90.696289ms)

                                                
                                                
-- stdout --
	multinode-812356
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-812356-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:57:39.577775 1162939 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:57:39.577908 1162939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:57:39.577925 1162939 out.go:358] Setting ErrFile to fd 2...
	I0923 13:57:39.577932 1162939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:57:39.578286 1162939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	I0923 13:57:39.578504 1162939 out.go:352] Setting JSON to false
	I0923 13:57:39.578530 1162939 mustload.go:65] Loading cluster: multinode-812356
	I0923 13:57:39.579215 1162939 config.go:182] Loaded profile config "multinode-812356": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 13:57:39.579243 1162939 status.go:174] checking status of multinode-812356 ...
	I0923 13:57:39.579419 1162939 notify.go:220] Checking for updates...
	I0923 13:57:39.580170 1162939 cli_runner.go:164] Run: docker container inspect multinode-812356 --format={{.State.Status}}
	I0923 13:57:39.596986 1162939 status.go:364] multinode-812356 host status = "Stopped" (err=<nil>)
	I0923 13:57:39.597007 1162939 status.go:377] host is not running, skipping remaining checks
	I0923 13:57:39.597014 1162939 status.go:176] multinode-812356 status: &{Name:multinode-812356 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:57:39.597042 1162939 status.go:174] checking status of multinode-812356-m02 ...
	I0923 13:57:39.597365 1162939 cli_runner.go:164] Run: docker container inspect multinode-812356-m02 --format={{.State.Status}}
	I0923 13:57:39.624168 1162939 status.go:364] multinode-812356-m02 host status = "Stopped" (err=<nil>)
	I0923 13:57:39.624196 1162939 status.go:377] host is not running, skipping remaining checks
	I0923 13:57:39.624204 1162939 status.go:176] multinode-812356-m02 status: &{Name:multinode-812356-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812356 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0923 13:57:42.992564 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:58:13.133746 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812356 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.293021645s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812356 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.93s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-812356
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812356-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-812356-m02 --driver=docker  --container-runtime=containerd: exit status 14 (93.927931ms)

                                                
                                                
-- stdout --
	* [multinode-812356-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-812356-m02' is duplicated with machine name 'multinode-812356-m02' in profile 'multinode-812356'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812356-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812356-m03 --driver=docker  --container-runtime=containerd: (34.156883652s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-812356
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-812356: exit status 80 (383.307024ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-812356 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-812356-m03 already exists in multinode-812356-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-812356-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-812356-m03: (1.984296504s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.67s)

                                                
                                    
TestPreload (111.8s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-940334 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-940334 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m13.196467163s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-940334 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-940334 image pull gcr.io/k8s-minikube/busybox: (2.098247462s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-940334
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-940334: (12.09147475s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-940334 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-940334 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.521993064s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-940334 image list
helpers_test.go:175: Cleaning up "test-preload-940334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-940334
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-940334: (2.494127731s)
--- PASS: TestPreload (111.80s)
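
TestPreload is a four-step flow: start an older Kubernetes with --preload=false, "image pull" an extra image, stop, restart with preloads enabled, then confirm via "image list" that the pulled image survived the restart. A sketch of that final verification step; imagePresent is a hypothetical helper, and the image name is the one pulled above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent runs "minikube image list" and reports whether want appears,
// mirroring the post-restart check at the end of TestPreload.
func imagePresent(profile, want string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "image", "list").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), want), nil
}

func main() {
	ok, err := imagePresent("test-preload-940334", "gcr.io/k8s-minikube/busybox")
	fmt.Println(ok, err)
}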

                                                
                                    
TestScheduledStopUnix (107.51s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-199733 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-199733 --memory=2048 --driver=docker  --container-runtime=containerd: (30.952775201s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-199733 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-199733 -n scheduled-stop-199733
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-199733 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0923 14:01:31.421707 1033616 retry.go:31] will retry after 105.656µs: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.423428 1033616 retry.go:31] will retry after 105.001µs: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.424138 1033616 retry.go:31] will retry after 229.8µs: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.425234 1033616 retry.go:31] will retry after 321.659µs: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.426343 1033616 retry.go:31] will retry after 517.937µs: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.427490 1033616 retry.go:31] will retry after 782.674µs: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.428607 1033616 retry.go:31] will retry after 1.219255ms: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.430830 1033616 retry.go:31] will retry after 2.303903ms: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.434057 1033616 retry.go:31] will retry after 2.183098ms: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.437286 1033616 retry.go:31] will retry after 2.090419ms: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.439474 1033616 retry.go:31] will retry after 8.254259ms: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.448693 1033616 retry.go:31] will retry after 7.607389ms: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.456889 1033616 retry.go:31] will retry after 11.368089ms: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.469152 1033616 retry.go:31] will retry after 23.343863ms: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
I0923 14:01:31.495261 1033616 retry.go:31] will retry after 43.281171ms: open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/scheduled-stop-199733/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-199733 --cancel-scheduled
E0923 14:01:50.070475 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-199733 -n scheduled-stop-199733
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-199733
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-199733 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-199733
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-199733: exit status 7 (70.53483ms)

                                                
                                                
-- stdout --
	scheduled-stop-199733
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-199733 -n scheduled-stop-199733
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-199733 -n scheduled-stop-199733: exit status 7 (71.003218ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-199733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-199733
E0923 14:02:42.992553 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-199733: (4.9915796s)
--- PASS: TestScheduledStopUnix (107.51s)
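
The retry.go lines above show the profile's pid file being polled with jittered, roughly doubling delays (105µs, 229µs, 321µs, ... 43ms). A minimal sketch of that capped, jittered backoff pattern; waitForFile and the path are illustrative only:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls for path with jittered, roughly doubling delays, the same
// shape as the retry.go intervals in the log above.
func waitForFile(path string, maxWait time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		// Sleep delay plus up to 100% jitter, then double.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	return fmt.Errorf("%s did not appear within %v", path, maxWait)
}

func main() {
	fmt.Println(waitForFile("/tmp/example/pid", 2*time.Second)) // hypothetical path
}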

                                                
                                    
TestInsufficientStorage (9.9s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-381512 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-381512 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.442212328s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f69abf02-1ca8-4ef7-9090-84d6dbac8285","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-381512] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"caf5c31b-5ff9-4957-9e42-ebc7163a3973","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"7045268b-208a-4774-babb-fb31b7a13a8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a3e0e3cc-9ae7-44da-96bd-20c36288a416","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig"}}
	{"specversion":"1.0","id":"86f346aa-814e-4626-ab58-0ece8068db78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube"}}
	{"specversion":"1.0","id":"984813af-f03a-4671-b200-35ff1e6d4581","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7760ffb0-47ab-4e58-834b-bb95be1c0035","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7eeb7f86-7a71-45ce-b8ef-31e6f1b417b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"05923341-eb61-4082-ae4e-a4b78a132771","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"630ff57b-a5c3-4b9e-a4e1-1f3776551bbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b0a295b-b86d-4b49-a7b7-c4b75d4b7f9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a5fed1f4-9dc1-448b-a905-628bd2de02ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-381512\" primary control-plane node in \"insufficient-storage-381512\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"91f21f09-8230-4340-816c-22c42a28d3d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d55a268-a60e-41d5-9b78-15db682280de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e416234-4d1d-4075-b478-9927f6b87c35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-381512 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-381512 --output=json --layout=cluster: exit status 7 (288.876835ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-381512","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-381512","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 14:02:55.186640 1181616 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-381512" does not appear in /home/jenkins/minikube-integration/19690-1028234/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-381512 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-381512 --output=json --layout=cluster: exit status 7 (296.035458ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-381512","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-381512","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 14:02:55.482081 1181677 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-381512" does not appear in /home/jenkins/minikube-integration/19690-1028234/kubeconfig
	E0923 14:02:55.492546 1181677 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/insufficient-storage-381512/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-381512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-381512
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-381512: (1.867954726s)
--- PASS: TestInsufficientStorage (9.90s)
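
"status --output=json --layout=cluster" emits a single JSON document that reuses HTTP-like status codes: 507 InsufficientStorage for the cluster and node, 405 Stopped for components, 500 Error for the kubeconfig. A decoding sketch with struct fields inferred from the output above (the real schema may carry more fields):

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors only the fields visible in the JSON above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	raw := []byte(`{"Name":"insufficient-storage-381512","StatusCode":507,"StatusName":"InsufficientStorage","Nodes":[{"Name":"insufficient-storage-381512","StatusCode":507,"StatusName":"InsufficientStorage"}]}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	for _, n := range st.Nodes {
		fmt.Printf("%s: %d %s\n", n.Name, n.StatusCode, n.StatusName) // 507 InsufficientStorage
	}
}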

                                                
                                    
TestRunningBinaryUpgrade (87.05s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3278152198 start -p running-upgrade-473851 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0923 14:10:46.061261 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3278152198 start -p running-upgrade-473851 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (52.038523835s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-473851 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0923 14:11:50.070474 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-473851 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.842993301s)
helpers_test.go:175: Cleaning up "running-upgrade-473851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-473851
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-473851: (2.768710658s)
--- PASS: TestRunningBinaryUpgrade (87.05s)

                                                
                                    
TestKubernetesUpgrade (109.7s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-623881 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-623881 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.397627598s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-623881
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-623881: (1.238504002s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-623881 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-623881 status --format={{.Host}}: exit status 7 (70.39678ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-623881 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-623881 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.290874765s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-623881 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-623881 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-623881 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (123.05752ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-623881] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-623881
	    minikube start -p kubernetes-upgrade-623881 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6238812 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-623881 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-623881 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-623881 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.661710817s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-623881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-623881
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-623881: (2.760616397s)
--- PASS: TestKubernetesUpgrade (109.70s)
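
The downgrade attempt is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) before any cluster mutation, which is why the follow-up restart at v1.31.1 succeeds in seconds. A sketch that probes for that refusal by exit code; tryStartVersion is a hypothetical helper built only on the behavior shown above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// tryStartVersion attempts "minikube start --kubernetes-version=<v>" on an
// existing profile and reports whether minikube refused it as a downgrade
// (exit 106, K8S_DOWNGRADE_UNSUPPORTED in the log above).
func tryStartVersion(profile, version string) (downgradeRefused bool, err error) {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", profile,
		"--kubernetes-version="+version, "--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		return true, nil
	}
	if err != nil {
		return false, fmt.Errorf("start failed: %v: %s", err, out)
	}
	return false, nil
}

func main() {
	refused, err := tryStartVersion("kubernetes-upgrade-623881", "v1.20.0")
	fmt.Println(refused, err)
}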

                                                
                                    
TestMissingContainerUpgrade (149.62s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1119094355 start -p missing-upgrade-369136 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1119094355 start -p missing-upgrade-369136 --memory=2200 --driver=docker  --container-runtime=containerd: (51.106155392s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-369136
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-369136: (10.319606301s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-369136
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-369136 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-369136 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m24.908131526s)
helpers_test.go:175: Cleaning up "missing-upgrade-369136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-369136
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-369136: (2.465957476s)
--- PASS: TestMissingContainerUpgrade (149.62s)

                                                
                                    
TestPause/serial/Start (65.58s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-348198 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-348198 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m5.583360531s)
--- PASS: TestPause/serial/Start (65.58s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-047315 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-047315 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (80.820295ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-047315] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-047315 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-047315 --driver=docker  --container-runtime=containerd: (43.89026569s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-047315 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.28s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-047315 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-047315 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.025394287s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-047315 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-047315 status -o json: exit status 2 (305.611019ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-047315","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-047315
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-047315: (1.926186961s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.26s)

                                                
                                    
TestNoKubernetes/serial/Start (6.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-047315 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-047315 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.055994312s)
--- PASS: TestNoKubernetes/serial/Start (6.06s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-047315 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-047315 "sudo systemctl is-active --quiet service kubelet": exit status 1 (261.442204ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-047315
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-047315: (1.21994005s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-047315 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-047315 --driver=docker  --container-runtime=containerd: (7.715927823s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.72s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.58s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-348198 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-348198 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.569229728s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-047315 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-047315 "sudo systemctl is-active --quiet service kubelet": exit status 1 (373.639551ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
TestPause/serial/Pause (0.89s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-348198 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.89s)

                                                
                                    
TestPause/serial/VerifyStatus (0.38s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-348198 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-348198 --output=json --layout=cluster: exit status 2 (382.866776ms)

-- stdout --
	{"Name":"pause-348198","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-348198","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
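
Note: the --layout=cluster JSON encodes component state as HTTP-like status codes (200 OK, 405 Stopped, 418 Paused), and the command's own exit status 2 mirrors the paused state, which is why a non-zero exit still counts as a pass here. A sketch for pulling just the per-component states out of that JSON, assuming jq is available on the host:

  out/minikube-linux-arm64 status -p pause-348198 --output=json --layout=cluster \
    | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'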

                                                
                                    
TestPause/serial/Unpause (0.82s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-348198 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.82s)

                                                
                                    
TestPause/serial/PauseAgain (1.11s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-348198 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-348198 --alsologtostderr -v=5: (1.112104718s)
--- PASS: TestPause/serial/PauseAgain (1.11s)
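
Note: taken together, Pause, Unpause, and PauseAgain exercise the full pause lifecycle: freezing the running containers, thawing them, and confirming a thawed cluster can be frozen again. The equivalent manual sequence on the same profile:

  out/minikube-linux-arm64 pause -p pause-348198
  out/minikube-linux-arm64 unpause -p pause-348198
  out/minikube-linux-arm64 pause -p pause-348198   # re-pausing after unpause succeeds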

                                                
                                    
TestNetworkPlugins/group/false (4.86s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-141863 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-141863 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (245.773622ms)

-- stdout --
	* [false-141863] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 14:04:13.165517 1192750 out.go:345] Setting OutFile to fd 1 ...
	I0923 14:04:13.165656 1192750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:04:13.165665 1192750 out.go:358] Setting ErrFile to fd 2...
	I0923 14:04:13.165669 1192750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:04:13.165914 1192750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1028234/.minikube/bin
	I0923 14:04:13.166307 1192750 out.go:352] Setting JSON to false
	I0923 14:04:13.167316 1192750 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":157600,"bootTime":1726942654,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0923 14:04:13.167411 1192750 start.go:139] virtualization:  
	I0923 14:04:13.170718 1192750 out.go:177] * [false-141863] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 14:04:13.174378 1192750 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 14:04:13.174511 1192750 notify.go:220] Checking for updates...
	I0923 14:04:13.182997 1192750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 14:04:13.185632 1192750 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-1028234/kubeconfig
	I0923 14:04:13.188261 1192750 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1028234/.minikube
	I0923 14:04:13.190869 1192750 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 14:04:13.193431 1192750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 14:04:13.196757 1192750 config.go:182] Loaded profile config "pause-348198": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 14:04:13.196855 1192750 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 14:04:13.228572 1192750 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 14:04:13.228689 1192750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 14:04:13.312349 1192750 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:51 SystemTime:2024-09-23 14:04:13.302325479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 14:04:13.312463 1192750 docker.go:318] overlay module found
	I0923 14:04:13.315417 1192750 out.go:177] * Using the docker driver based on user configuration
	I0923 14:04:13.318003 1192750 start.go:297] selected driver: docker
	I0923 14:04:13.318024 1192750 start.go:901] validating driver "docker" against <nil>
	I0923 14:04:13.318041 1192750 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 14:04:13.321382 1192750 out.go:201] 
	W0923 14:04:13.324028 1192750 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0923 14:04:13.326733 1192750 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-141863 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-141863

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-141863

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-141863

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-141863

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-141863

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-141863

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-141863

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-141863

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-141863

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-141863

>>> host: /etc/nsswitch.conf:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: /etc/hosts:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: /etc/resolv.conf:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-141863

>>> host: crictl pods:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: crictl containers:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> k8s: describe netcat deployment:
error: context "false-141863" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-141863" does not exist

>>> k8s: netcat logs:
error: context "false-141863" does not exist

>>> k8s: describe coredns deployment:
error: context "false-141863" does not exist

>>> k8s: describe coredns pods:
error: context "false-141863" does not exist

>>> k8s: coredns logs:
error: context "false-141863" does not exist

>>> k8s: describe api server pod(s):
error: context "false-141863" does not exist

>>> k8s: api server logs:
error: context "false-141863" does not exist

>>> host: /etc/cni:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: ip a s:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: ip r s:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: iptables-save:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: iptables table nat:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> k8s: describe kube-proxy daemon set:
error: context "false-141863" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-141863" does not exist

>>> k8s: kube-proxy logs:
error: context "false-141863" does not exist

>>> host: kubelet daemon status:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: kubelet daemon config:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> k8s: kubelet logs:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 23 Sep 2024 14:04:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-348198
contexts:
- context:
    cluster: pause-348198
    extensions:
    - extension:
        last-update: Mon, 23 Sep 2024 14:04:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-348198
  name: pause-348198
current-context: pause-348198
kind: Config
preferences: {}
users:
- name: pause-348198
  user:
    client-certificate: /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/pause-348198/client.crt
    client-key: /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/pause-348198/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-141863

>>> host: docker daemon status:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: docker daemon config:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: /etc/docker/daemon.json:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: docker system info:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: cri-docker daemon status:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: cri-docker daemon config:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: cri-dockerd version:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: containerd daemon status:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: containerd daemon config:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: /etc/containerd/config.toml:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: containerd config dump:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: crio daemon status:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: crio daemon config:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: /etc/crio:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"

>>> host: crio config:
* Profile "false-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141863"
----------------------- debugLogs end: false-141863 [took: 4.387658134s] --------------------------------
helpers_test.go:175: Cleaning up "false-141863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-141863
--- PASS: TestNetworkPlugins/group/false (4.86s)
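
Note: this test passes because the start is expected to be rejected. With --container-runtime=containerd, minikube requires a CNI, so --cni=false fails validation before any cluster is provisioned and returns exit code 14 (MK_USAGE); every debugLogs probe above reports a missing context for the same reason. Reproduced by hand:

  out/minikube-linux-arm64 start -p false-141863 --cni=false --driver=docker --container-runtime=containerd
  echo $?   # 14 (MK_USAGE)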

                                                
                                    
TestPause/serial/DeletePaused (2.88s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-348198 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-348198 --alsologtostderr -v=5: (2.88476075s)
--- PASS: TestPause/serial/DeletePaused (2.88s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.17s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-348198
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-348198: exit status 1 (25.876903ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-348198: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)
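
Note: deletion is verified negatively here; docker volume inspect exiting 1 with "no such volume" is the desired result. A sketch of the same post-delete checks:

  docker volume inspect pause-348198 >/dev/null 2>&1 || echo "volume removed"
  docker ps -a --filter name=pause-348198 --format '{{.Names}}'   # expect empty output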

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.34s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (133.23s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.228136737 start -p stopped-upgrade-501175 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0923 14:06:50.070206 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.228136737 start -p stopped-upgrade-501175 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m30.707321164s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.228136737 -p stopped-upgrade-501175 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.228136737 -p stopped-upgrade-501175 stop: (1.299240612s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-501175 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0923 14:07:42.993117 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-501175 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.22573704s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (133.23s)
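
Note: the upgrade path above is three steps: provision with the legacy v1.26.0 binary, stop the cluster, then restart the same profile with the binary under test. Condensed (the first path is the test's temporary download, the second the build output):

  /tmp/minikube-v1.26.0.228136737 start -p stopped-upgrade-501175 --memory=2200 --vm-driver=docker --container-runtime=containerd
  /tmp/minikube-v1.26.0.228136737 -p stopped-upgrade-501175 stop
  out/minikube-linux-arm64 start -p stopped-upgrade-501175 --memory=2200 --driver=docker --container-runtime=containerd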

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-501175
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-501175: (1.001865894s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (95.63s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m35.631270732s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.63s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (52.24s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (52.243871088s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.24s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-141863 "pgrep -a kubelet"
I0923 14:12:12.418877 1033616 config.go:182] Loaded profile config "auto-141863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.43s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-141863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9bjzd" [dc835d87-15ba-4c80-ad1f-08354c42cbb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9bjzd" [dc835d87-15ba-4c80-ad1f-08354c42cbb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.007546388s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.43s)
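
Note: `kubectl replace --force` deletes and recreates the netcat deployment so the probe pod always starts fresh; the test then polls for app=netcat pods to become Ready. Roughly equivalent by hand (timeout matching the test's 15m wait):

  kubectl --context auto-141863 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context auto-141863 wait --for=condition=Ready pod -l app=netcat --timeout=900s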

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-141863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.26s)
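
Note: the DNS/Localhost/HairPin trio all execute inside the netcat pod: nslookup exercises cluster DNS, `nc -z localhost 8080` checks the pod's own port, and `nc -z netcat 8080` checks hairpin traffic, i.e. the pod reaching itself back through its service. In nc, -w 5 sets a 5s timeout, -i 5 the probe interval, and -z means scan without sending data:

  kubectl --context auto-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"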

                                                
                                    
TestNetworkPlugins/group/calico/Start (71.41s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m11.408295669s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rvh6q" [ed80a52e-6b42-45ee-bb9b-8b5237b3c7f3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003939603s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
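
Note: the ControllerPod check is a label-selector wait on the CNI's daemon pod (here app=flannel in the kube-flannel namespace). A roughly equivalent kubectl sketch, with the timeout shortened from the test's 10m for illustration:

  kubectl --context flannel-141863 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=600s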

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-141863 "pgrep -a kubelet"
I0923 14:13:01.923294 1033616 config.go:182] Loaded profile config "flannel-141863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-141863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hts9t" [3abbe10a-b311-4439-96a5-e1df13e038ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hts9t" [3abbe10a-b311-4439-96a5-e1df13e038ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003294736s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-141863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (55.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.241355478s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.24s)
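
Note: besides the named plugins exercised elsewhere in this group (flannel, calico, kindnet, bridge), --cni also accepts a path to a custom manifest, which is what this variant covers:

  out/minikube-linux-arm64 start -p custom-flannel-141863 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd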

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dt4c7" [7e9f5472-5ec4-45ee-a24f-1feadfd4f824] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003669066s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-141863 "pgrep -a kubelet"
I0923 14:14:06.943695 1033616 config.go:182] Loaded profile config "calico-141863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-141863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lhs5b" [4deb8cf3-fa22-4c14-98ae-dce74c705187] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lhs5b" [4deb8cf3-fa22-4c14-98ae-dce74c705187] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003212102s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.40s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-141863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-141863 "pgrep -a kubelet"
I0923 14:14:34.468104 1033616 config.go:182] Loaded profile config "custom-flannel-141863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-141863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hmwqm" [99a4bbf5-bf69-4c76-b61b-b194d9335852] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hmwqm" [99a4bbf5-bf69-4c76-b61b-b194d9335852] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004912397s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (96.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m36.101038847s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (96.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-141863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (78.27s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m18.266551321s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-btzhw" [0134f377-2a6f-4bb4-a5ef-e6db3c030ee9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003899531s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-141863 "pgrep -a kubelet"
I0923 14:16:23.788453 1033616 config.go:182] Loaded profile config "kindnet-141863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-141863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mtsgj" [8d0be3d2-51b3-4655-a1ba-dd794193c2ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mtsgj" [8d0be3d2-51b3-4655-a1ba-dd794193c2ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004056216s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-141863 "pgrep -a kubelet"
I0923 14:16:30.535985 1033616 config.go:182] Loaded profile config "bridge-141863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-141863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8p4vh" [0f77450a-c222-4e31-a5a4-a28a0ae3d225] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8p4vh" [0f77450a-c222-4e31-a5a4-a28a0ae3d225] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005807171s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-141863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)
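
The DNS check resolves the short name kubernetes.default from inside the pod, which only succeeds if kubelet injected a resolv.conf pointing at a working cluster DNS with the right search domains:

    # exit status 0 means in-cluster DNS resolution works
    kubectl --context kindnet-141863 exec deployment/netcat -- nslookup kubernetes.default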

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)
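
In the Localhost probe, nc -z opens a connection without sending any payload and -w 5 caps the connect wait at five seconds, so the command verifies the container is listening on its own loopback at port 8080 (the harness adds -i 5 as an interval delay; a minimal version omits it):

    # -z: probe only, no data sent; -w 5: five-second connect timeout
    kubectl --context kindnet-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"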

TestNetworkPlugins/group/kindnet/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)
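
HairPin dials the pod's own Service name (netcat) from inside the pod, so the connection leaves the pod, hits the Service VIP, and must be NATed back to the same pod; this only passes when hairpin traffic is handled correctly by the CNI/kube-proxy path. Checking by hand:

    # exit status 0 means the hairpin connection succeeded
    kubectl --context kindnet-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"; echo $?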

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-141863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.27s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (47.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-141863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (47.491292258s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (47.49s)

TestStartStop/group/old-k8s-version/serial/FirstStart (175.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-545656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0923 14:17:12.823820 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:12.830202 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:12.841467 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:12.862767 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:12.904090 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:12.986225 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:13.147767 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:13.469073 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:14.111201 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:15.392519 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:17.954326 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:23.076517 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:33.318714 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:42.992973 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-545656 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m55.937753319s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (175.94s)
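
FirstStart pins --kubernetes-version=v1.20.0 to exercise the oldest supported release; the cert_rotation errors interleaved above reference client certs of already-deleted profiles and appear to be unrelated background noise from the shared test binary. A minimal equivalent start (profile name illustrative):

    # boot a specific, older Kubernetes release on the docker driver
    minikube start -p old-k8s-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd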

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-141863 "pgrep -a kubelet"
I0923 14:17:44.793737 1033616 config.go:182] Loaded profile config "enable-default-cni-141863": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-141863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9pnzn" [676bafae-31a9-477c-b023-f798706a6c85] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9pnzn" [676bafae-31a9-477c-b023-f798706a6c85] Running
E0923 14:17:53.800040 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:55.564532 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:55.571234 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:55.582613 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:55.604002 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:55.645566 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:55.727145 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:55.888875 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:56.210595 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:56.852972 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:58.134904 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004052389s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.46s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-141863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-141863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
E0923 14:32:07.985285 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:32:12.811738 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:32:42.992833 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:32:45.206584 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:32:45.751704 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:32:55.564351 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (62.24s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-700594 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 14:18:34.762017 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:36.545316 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:00.516187 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:00.522718 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:00.534151 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:00.555515 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:00.596958 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:00.678403 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:00.839954 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:01.161588 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:01.803760 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:03.085101 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:05.646961 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:10.769010 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:17.507458 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:21.011150 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-700594 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m2.238730841s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.24s)

TestStartStop/group/no-preload/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-700594 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7aa411a2-3633-4a4b-9a2f-3c8a9473b2ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7aa411a2-3633-4a4b-9a2f-3c8a9473b2ad] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004599385s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-700594 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)
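
DeployApp creates a busybox pod, waits for it to come up, then execs ulimit -n to confirm exec works end-to-end and the open-file limit is sane. A hand-run sketch with the wait step made explicit:

    kubectl --context no-preload-700594 create -f testdata/busybox.yaml
    kubectl --context no-preload-700594 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-700594 exec busybox -- /bin/sh -c "ulimit -n"   # prints the soft open-file limit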

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-700594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-700594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.036745156s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-700594 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)
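
The --images and --registries flags redirect an addon's default image, which is how this test points metrics-server at echoserver on a fake registry while the cluster is live:

    # override the addon image/registry on a running cluster, then inspect the result
    minikube addons enable metrics-server -p no-preload-700594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context no-preload-700594 describe deploy/metrics-server -n kube-system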

TestStartStop/group/no-preload/serial/Stop (12.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-700594 --alsologtostderr -v=3
E0923 14:19:34.833790 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:34.840201 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:34.851905 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:34.873269 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:34.915536 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:34.996772 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:35.159511 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:35.481274 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:36.123539 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:37.405162 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:39.966874 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:41.492865 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:45.091204 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-700594 --alsologtostderr -v=3: (12.1071248s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-700594 -n no-preload-700594
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-700594 -n no-preload-700594: exit status 7 (78.057014ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-700594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
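
status --format takes a Go template over minikube's status struct, and a stopped host is reported through exit code 7, which the harness explicitly tolerates ("may be ok"). Reproducing the probe:

    # {{.Host}} selects a single field; exit code 7 signals a stopped profile
    minikube status --format='{{.Host}}' -p no-preload-700594
    echo $?   # expect 7 while the profile is stopped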

TestStartStop/group/no-preload/serial/SecondStart (266.74s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-700594 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 14:19:55.332548 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:56.683662 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-700594 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.391889409s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-700594 -n no-preload-700594
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.74s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-545656 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ee3851f-ba73-44a1-92f3-692fa6aa6442] Pending
helpers_test.go:344: "busybox" [2ee3851f-ba73-44a1-92f3-692fa6aa6442] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ee3851f-ba73-44a1-92f3-692fa6aa6442] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00504467s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-545656 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.91s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-545656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-545656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.264155169s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-545656 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)

TestStartStop/group/old-k8s-version/serial/Stop (12.20s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-545656 --alsologtostderr -v=3
E0923 14:20:15.813838 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:20:22.454307 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-545656 --alsologtostderr -v=3: (12.198899225s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.20s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-545656 -n old-k8s-version-545656
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-545656 -n old-k8s-version-545656: exit status 7 (69.797487ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-545656 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7q5tr" [2e1e24be-a560-4f42-be07-021c82fb1c90] Running
E0923 14:24:14.656618 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003306012s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7q5tr" [2e1e24be-a560-4f42-be07-021c82fb1c90] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003658086s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-700594 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-700594 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)
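
VerifyKubernetesImages dumps the node's image store in machine-readable form and flags anything outside the expected minikube set, such as the busybox and kindnetd images above:

    # JSON inventory of every image present in the profile's node
    minikube -p no-preload-700594 image list --format=json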

TestStartStop/group/no-preload/serial/Pause (3.08s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-700594 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-700594 -n no-preload-700594
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-700594 -n no-preload-700594: exit status 2 (341.032489ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-700594 -n no-preload-700594
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-700594 -n no-preload-700594: exit status 2 (310.246087ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-700594 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-700594 -n no-preload-700594
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-700594 -n no-preload-700594
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.08s)
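
While paused, the control plane is frozen: {{.APIServer}} reads Paused and {{.Kubelet}} reads Stopped, each reported via exit code 2, and unpause restores both. The cycle by hand:

    minikube pause -p no-preload-700594
    minikube status --format='{{.APIServer}}' -p no-preload-700594   # Paused, exit code 2
    minikube unpause -p no-preload-700594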

TestStartStop/group/embed-certs/serial/FirstStart (80.35s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-672015 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 14:24:34.833441 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:25:02.539686 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:25:29.068258 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-672015 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m20.341571346s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.35s)

TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-672015 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ed6afc8e-6808-4d6f-98bb-7357c69c9bfb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ed6afc8e-6808-4d6f-98bb-7357c69c9bfb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004097169s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-672015 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.49s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-672015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-672015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.36386659s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-672015 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.49s)

TestStartStop/group/embed-certs/serial/Stop (12.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-672015 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-672015 --alsologtostderr -v=3: (12.089166934s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-672015 -n embed-certs-672015
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-672015 -n embed-certs-672015: exit status 7 (72.706501ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-672015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (267.59s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-672015 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 14:26:17.506834 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:30.793831 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-672015 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.241145456s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-672015 -n embed-certs-672015
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.59s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-n6678" [c9e93fc3-eeaa-4d0f-a956-0ad0e65afec3] Running
E0923 14:26:45.217984 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003971235s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-n6678" [c9e93fc3-eeaa-4d0f-a956-0ad0e65afec3] Running
E0923 14:26:50.069572 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004466888s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-545656 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-545656 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (2.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-545656 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-545656 -n old-k8s-version-545656
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-545656 -n old-k8s-version-545656: exit status 2 (315.7923ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-545656 -n old-k8s-version-545656
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-545656 -n old-k8s-version-545656: exit status 2 (311.436465ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-545656 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-545656 -n old-k8s-version-545656
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-545656 -n old-k8s-version-545656
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.96s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-310775 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 14:26:58.498484 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:27:12.812114 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/auto-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:27:26.062899 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:27:42.993167 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/addons-095355/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:27:45.206491 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:27:55.565038 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:28:12.910058 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/enable-default-cni-141863/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-310775 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m22.578480459s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.58s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-310775 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0cd0801a-8c52-4c63-aea0-30c2d87dc52a] Pending
helpers_test.go:344: "busybox" [0cd0801a-8c52-4c63-aea0-30c2d87dc52a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0cd0801a-8c52-4c63-aea0-30c2d87dc52a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003966488s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-310775 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-310775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-310775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.123337021s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-310775 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)
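
The describe call above is how the suite confirms that the --images/--registries overrides landed on the metrics-server deployment. A narrower probe, using stock kubectl jsonpath; the expectation is hedged, since with --registries=MetricsServer=fake.domain in effect the reported image should point at fake.domain rather than the default registry:

    kubectl --context default-k8s-diff-port-310775 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'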

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-310775 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-310775 --alsologtostderr -v=3: (12.089101188s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-310775 -n default-k8s-diff-port-310775
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-310775 -n default-k8s-diff-port-310775: exit status 7 (69.006112ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-310775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
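
The pattern exercised here is enabling an addon against a stopped profile: status exits non-zero for a stopped host (hence the "may be ok" note above), and the addon change is recorded in the profile config so it takes effect on the next start. A minimal sketch, with '|| true' only to keep a shell session going past the expected failure:

    # prints "Stopped" and exits with status 7, as in the run above
    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-310775 || true
    # recorded in the profile; applied when the cluster is started again
    out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-310775 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4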

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-310775 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 14:29:00.516571 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/calico-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:24.125179 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:24.131556 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:24.142977 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:24.164391 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:24.205763 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:24.287188 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:24.448533 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:24.770280 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:25.412055 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:26.693456 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:29.255701 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:34.377323 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:34.833258 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/custom-flannel-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:44.619468 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:01.891301 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:01.898005 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:01.909564 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:01.931237 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:01.972961 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:02.054634 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:02.216302 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:02.538015 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:03.180048 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:04.461657 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:05.101610 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:07.023524 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:12.145392 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:22.386742 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-310775 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m25.096694351s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-310775 -n default-k8s-diff-port-310775
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.42s)
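
SecondStart simply re-issues the FirstStart command against the stopped profile; minikube detects the existing profile and restarts it rather than provisioning from scratch, which is why the flags are identical. A condensed sketch of the restart-and-verify pair the test performs:

    out/minikube-linux-arm64 start -p default-k8s-diff-port-310775 --memory=2200 --alsologtostderr \
      --wait=true --apiserver-port=8444 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.31.1
    # expected to print "Running" once the restart succeeds
    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-310775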

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wzjfp" [d2ccfe0d-8ceb-4083-9437-85878a07d899] Running
E0923 14:30:42.868124 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:46.063844 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/no-preload-700594/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007989608s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wzjfp" [d2ccfe0d-8ceb-4083-9437-85878a07d899] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004547465s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-672015 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-672015 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-672015 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-672015 -n embed-certs-672015
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-672015 -n embed-certs-672015: exit status 2 (334.396645ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-672015 -n embed-certs-672015
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-672015 -n embed-certs-672015: exit status 2 (320.014536ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-672015 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-672015 -n embed-certs-672015
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-672015 -n embed-certs-672015
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.08s)
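
The Pause assertions lean on minikube's Go-template status output: while paused, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, each with exit status 2, and both probes recover after unpause. A condensed sketch of the same round trip:

    out/minikube-linux-arm64 pause -p embed-certs-672015 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-672015 || echo "exit $? while paused"
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-672015 || echo "exit $? while paused"
    out/minikube-linux-arm64 unpause -p embed-certs-672015 --alsologtostderr -v=1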

TestStartStop/group/newest-cni/serial/FirstStart (38.65s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-175144 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 14:31:17.506624 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/kindnet-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:23.829621 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/old-k8s-version-545656/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:30.794209 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/bridge-141863/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:33.137894 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-175144 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (38.64619846s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.65s)
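
The newest-cni start line packs several knobs worth unpacking; the annotations below describe standard minikube/kubeadm behavior rather than anything specific to this run:

    # --wait=apiserver,system_pods,default_sa   wait only for these components, not full readiness
    # --feature-gates ServerSideApply=true      Kubernetes feature gate handed to the cluster components
    # --network-plugin=cni                      defer pod networking to a CNI plugin (hence the
    #                                           "cni mode requires additional setup" warnings below)
    # --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16
    #                                           kubeadm setting injected via minikube's extra-config passthrough
    out/minikube-linux-arm64 start -p newest-cni-175144 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.1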

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-175144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-175144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.245754823s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-175144 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-175144 --alsologtostderr -v=3: (1.291841033s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-175144 -n newest-cni-175144
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-175144 -n newest-cni-175144: exit status 7 (72.348758ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-175144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (16.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-175144 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 14:31:50.069590 1033616 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/functional-006225/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-175144 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (15.799947047s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-175144 -n newest-cni-175144
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.25s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-175144 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)
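
VerifyKubernetesImages lists the images in the node's container store and flags anything outside minikube's expected set (the "Found non-minikube image" lines above). A rough way to eyeball the same data by hand; the grep pattern is only an approximation of what the test treats as expected images:

    out/minikube-linux-arm64 -p newest-cni-175144 image list                 # one image reference per line
    out/minikube-linux-arm64 -p newest-cni-175144 image list | grep -v registry.k8s.io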

TestStartStop/group/newest-cni/serial/Pause (3.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-175144 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-175144 -n newest-cni-175144
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-175144 -n newest-cni-175144: exit status 2 (350.665316ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-175144 -n newest-cni-175144
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-175144 -n newest-cni-175144: exit status 2 (309.254812ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-175144 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-175144 -n newest-cni-175144
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-175144 -n newest-cni-175144
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.34s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-grcdg" [789d61c2-4e62-4940-99d3-8e7717b5f90d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003331591s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-grcdg" [789d61c2-4e62-4940-99d3-8e7717b5f90d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004147132s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-310775 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-310775 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-310775 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-310775 -n default-k8s-diff-port-310775
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-310775 -n default-k8s-diff-port-310775: exit status 2 (292.703724ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-310775 -n default-k8s-diff-port-310775
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-310775 -n default-k8s-diff-port-310775: exit status 2 (298.968388ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-310775 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-310775 -n default-k8s-diff-port-310775
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-310775 -n default-k8s-diff-port-310775
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

Test skip (27/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-290856 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-290856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-290856
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.42s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-141863 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-141863

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-141863

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-141863

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-141863

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-141863

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-141863

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-141863

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-141863

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-141863

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-141863

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: /etc/hosts:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: /etc/resolv.conf:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-141863

>>> host: crictl pods:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: crictl containers:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> k8s: describe netcat deployment:
error: context "kubenet-141863" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-141863" does not exist

>>> k8s: netcat logs:
error: context "kubenet-141863" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-141863" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-141863" does not exist

>>> k8s: coredns logs:
error: context "kubenet-141863" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-141863" does not exist

>>> k8s: api server logs:
error: context "kubenet-141863" does not exist

>>> host: /etc/cni:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: ip a s:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: ip r s:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: iptables-save:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: iptables table nat:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-141863" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-141863" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-141863" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: kubelet daemon config:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> k8s: kubelet logs:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19690-1028234/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 23 Sep 2024 14:04:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-348198
contexts:
- context:
    cluster: pause-348198
    extensions:
    - extension:
        last-update: Mon, 23 Sep 2024 14:04:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-348198
  name: pause-348198
current-context: pause-348198
kind: Config
preferences: {}
users:
- name: pause-348198
  user:
    client-certificate: /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/pause-348198/client.crt
    client-key: /home/jenkins/minikube-integration/19690-1028234/.minikube/profiles/pause-348198/client.key
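
This kubeconfig is the root of every failure above: the only context minikube has written is pause-348198, so any lookup against kubenet-141863 can only fail. A quick confirmation against the same kubeconfig:

    kubectl config get-contexts                  # lists pause-348198 only
    kubectl --context kubenet-141863 get pods    # reproduces: context "kubenet-141863" does not exist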

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-141863

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: docker daemon config:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: docker system info:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: cri-docker daemon status:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: cri-docker daemon config:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: cri-dockerd version:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: containerd daemon status:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: containerd daemon config:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: containerd config dump:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: crio daemon status:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: crio daemon config:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: /etc/crio:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

>>> host: crio config:
* Profile "kubenet-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141863"

----------------------- debugLogs end: kubenet-141863 [took: 4.101521258s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-141863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-141863
--- SKIP: TestNetworkPlugins/group/kubenet (4.42s)

TestNetworkPlugins/group/cilium (5.94s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-141863 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-141863

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-141863

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-141863

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-141863

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-141863

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-141863

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-141863

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-141863

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-141863

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-141863

>>> host: /etc/nsswitch.conf:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: /etc/hosts:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: /etc/resolv.conf:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-141863

>>> host: crictl pods:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: crictl containers:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> k8s: describe netcat deployment:
error: context "cilium-141863" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-141863" does not exist

>>> k8s: netcat logs:
error: context "cilium-141863" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-141863" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-141863" does not exist

>>> k8s: coredns logs:
error: context "cilium-141863" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-141863" does not exist

>>> k8s: api server logs:
error: context "cilium-141863" does not exist

>>> host: /etc/cni:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: ip a s:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: ip r s:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: iptables-save:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: iptables table nat:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-141863

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-141863

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-141863" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-141863" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-141863

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-141863

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-141863" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-141863" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-141863" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-141863" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-141863" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: kubelet daemon config:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> k8s: kubelet logs:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
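
This is the null document kubectl prints when no kubeconfig entries exist, which accounts for every "context was not found" / "does not exist" error above: there is simply no cilium-141863 entry left to resolve. A hypothetical reproduction sketch on any host with kubectl installed:

    KUBECONFIG=$(mktemp) kubectl config view                       # emits the same null config shown above
    KUBECONFIG=$(mktemp) kubectl --context cilium-141863 get pods  # error: context "cilium-141863" does not exist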

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-141863

>>> host: docker daemon status:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: docker daemon config:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: docker system info:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: cri-docker daemon status:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: cri-docker daemon config:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: cri-dockerd version:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: containerd daemon status:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: containerd daemon config:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: containerd config dump:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: crio daemon status:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: crio daemon config:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: /etc/crio:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

>>> host: crio config:
* Profile "cilium-141863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141863"

----------------------- debugLogs end: cilium-141863 [took: 5.552662567s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-141863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-141863
--- SKIP: TestNetworkPlugins/group/cilium (5.94s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-438415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-438415
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
