Test Report: Docker_Linux_containerd_arm64 19740

                    
f4f6e0076e771cedcca340e072cd1813dc91a89c:2024-10-02:36461

Failed tests (2/327)

|-------|---------------------------------------------------------|--------------|
| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                               | 211.07       |
| 299   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 372.14       |
|-------|---------------------------------------------------------|--------------|
TestAddons/serial/Volcano (211.07s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:809: volcano-admission stabilized in 52.792779ms
addons_test.go:801: volcano-scheduler stabilized in 53.19551ms
addons_test.go:817: volcano-controller stabilized in 53.361225ms
addons_test.go:823: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-lxhq9" [15e9a076-8642-4de5-a622-a312a41b9ac2] Running
addons_test.go:823: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004094655s
addons_test.go:827: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-6lxlc" [22f45ea6-374f-4d38-a1d2-835e7e1ca3a6] Running
addons_test.go:827: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003474263s
addons_test.go:831: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-xd4mg" [0fe8b2bc-a44e-4626-aace-1ebaf5a33fb0] Running
addons_test.go:831: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003523345s
addons_test.go:836: (dbg) Run:  kubectl --context addons-515343 delete -n volcano-system job volcano-admission-init
addons_test.go:842: (dbg) Run:  kubectl --context addons-515343 create -f testdata/vcjob.yaml
addons_test.go:850: (dbg) Run:  kubectl --context addons-515343 get vcjob -n my-volcano
addons_test.go:868: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [11af941b-8557-4361-9ec0-99f1bb0fc79c] Pending
helpers_test.go:344: "test-job-nginx-0" [11af941b-8557-4361-9ec0-99f1bb0fc79c] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:868: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:868: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-515343 -n addons-515343
addons_test.go:868: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-10-01 23:50:05.031759402 +0000 UTC m=+426.774606043
addons_test.go:868: (dbg) Run:  kubectl --context addons-515343 describe po test-job-nginx-0 -n my-volcano
addons_test.go:868: (dbg) kubectl --context addons-515343 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-9df8a26a-02e4-49c8-8bca-2ad494247091
                  volcano.sh/job-name: test-job
                  volcano.sh/job-retry-count: 0
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cxmvl (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-cxmvl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:868: (dbg) Run:  kubectl --context addons-515343 logs test-job-nginx-0 -n my-volcano
addons_test.go:868: (dbg) kubectl --context addons-515343 logs test-job-nginx-0 -n my-volcano:
addons_test.go:869: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
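
The FailedScheduling event above points to a capacity problem rather than a Volcano error: the test pod asks for a full CPU (requests and limits of cpu: 1) on a single-node cluster whose kic container was created with only 2 CPUs (see NanoCpus in the docker inspect output in the post-mortem below), so the addon workloads likely leave less than one schedulable CPU free. A rough diagnostic sketch for confirming the shortfall against this profile (commands only, not part of the test run; it assumes the minikube node carries the profile name addons-515343):

    # Compare the node's allocatable CPU with what running pods already request
    kubectl --context addons-515343 describe node addons-515343 | grep -A 10 "Allocated resources"

    # Show the CPU requests/limits of the stuck Volcano test pod
    kubectl --context addons-515343 get pod test-job-nginx-0 -n my-volcano \
      -o jsonpath='{.spec.containers[0].resources}'
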
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-515343
helpers_test.go:235: (dbg) docker inspect addons-515343:

-- stdout --
	[
	    {
	        "Id": "e099d2214ebe58fd32760e30ceaf4804039e4da2228bbdb4030df570db9c4d35",
	        "Created": "2024-10-01T23:43:36.241025221Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1751779,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-01T23:43:36.388755603Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/e099d2214ebe58fd32760e30ceaf4804039e4da2228bbdb4030df570db9c4d35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e099d2214ebe58fd32760e30ceaf4804039e4da2228bbdb4030df570db9c4d35/hostname",
	        "HostsPath": "/var/lib/docker/containers/e099d2214ebe58fd32760e30ceaf4804039e4da2228bbdb4030df570db9c4d35/hosts",
	        "LogPath": "/var/lib/docker/containers/e099d2214ebe58fd32760e30ceaf4804039e4da2228bbdb4030df570db9c4d35/e099d2214ebe58fd32760e30ceaf4804039e4da2228bbdb4030df570db9c4d35-json.log",
	        "Name": "/addons-515343",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-515343:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-515343",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e8bb17bdcdfeb5819ed554e53ba7031eb926eb88b2c075a7d7393a03d197be84-init/diff:/var/lib/docker/overlay2/f36fd63656976433bbd6b304cfd5552e0c71ee74203e3ec14aaa10779b0a0aa6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8bb17bdcdfeb5819ed554e53ba7031eb926eb88b2c075a7d7393a03d197be84/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8bb17bdcdfeb5819ed554e53ba7031eb926eb88b2c075a7d7393a03d197be84/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8bb17bdcdfeb5819ed554e53ba7031eb926eb88b2c075a7d7393a03d197be84/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-515343",
	                "Source": "/var/lib/docker/volumes/addons-515343/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-515343",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-515343",
	                "name.minikube.sigs.k8s.io": "addons-515343",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c30d7261afe97b8b951b463e40dc9abf62b3bd7607ee1e13c8c5836ab004b6",
	            "SandboxKey": "/var/run/docker/netns/93c30d7261af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34664"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34665"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34668"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34666"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34667"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-515343": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "60bedaf6dd9776c9521b427aecb914c972576727ac531475c82cb070a4120c12",
	                    "EndpointID": "c9700e713fafbaa7d9a227af8961849ce29df746e474eeed115d4442236d0dd4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-515343",
	                        "e099d2214ebe"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
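
The HostConfig block in the inspect output above shows the kic node container was created with NanoCpus 2000000000 (2 CPUs) and Memory 4194304000 bytes (matching the --memory=4000 start flag), which is consistent with the "Insufficient cpu" scheduling failure earlier in this test. As a small convenience sketch, the same two fields can be read directly with a Go template instead of scanning the full JSON (container name taken from this report):

    docker inspect -f '{{.HostConfig.NanoCpus}} nanocpus, {{.HostConfig.Memory}} bytes' addons-515343
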
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-515343 -n addons-515343
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-515343 logs -n 25: (1.5600193s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-994430   | jenkins | v1.34.0 | 01 Oct 24 23:42 UTC |                     |
	|         | -p download-only-994430              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC | 01 Oct 24 23:43 UTC |
	| delete  | -p download-only-994430              | download-only-994430   | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC | 01 Oct 24 23:43 UTC |
	| start   | -o=json --download-only              | download-only-271061   | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC |                     |
	|         | -p download-only-271061              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC | 01 Oct 24 23:43 UTC |
	| delete  | -p download-only-271061              | download-only-271061   | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC | 01 Oct 24 23:43 UTC |
	| delete  | -p download-only-994430              | download-only-994430   | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC | 01 Oct 24 23:43 UTC |
	| delete  | -p download-only-271061              | download-only-271061   | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC | 01 Oct 24 23:43 UTC |
	| start   | --download-only -p                   | download-docker-721980 | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC |                     |
	|         | download-docker-721980               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-721980            | download-docker-721980 | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC | 01 Oct 24 23:43 UTC |
	| start   | --download-only -p                   | binary-mirror-632362   | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC |                     |
	|         | binary-mirror-632362                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39787               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-632362              | binary-mirror-632362   | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC | 01 Oct 24 23:43 UTC |
	| addons  | enable dashboard -p                  | addons-515343          | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC |                     |
	|         | addons-515343                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-515343          | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC |                     |
	|         | addons-515343                        |                        |         |         |                     |                     |
	| start   | -p addons-515343 --wait=true         | addons-515343          | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC | 01 Oct 24 23:46 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:43:12
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:43:12.283688 1751274 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:43:12.283861 1751274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:43:12.283871 1751274 out.go:358] Setting ErrFile to fd 2...
	I1001 23:43:12.283876 1751274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:43:12.284156 1751274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	I1001 23:43:12.284648 1751274 out.go:352] Setting JSON to false
	I1001 23:43:12.285610 1751274 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26740,"bootTime":1727799453,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 23:43:12.285678 1751274 start.go:139] virtualization:  
	I1001 23:43:12.288285 1751274 out.go:177] * [addons-515343] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 23:43:12.290871 1751274 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:43:12.290920 1751274 notify.go:220] Checking for updates...
	I1001 23:43:12.294161 1751274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:43:12.296142 1751274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1001 23:43:12.298132 1751274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	I1001 23:43:12.300234 1751274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 23:43:12.302154 1751274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:43:12.304825 1751274 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:43:12.330034 1751274 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:43:12.330157 1751274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:43:12.385105 1751274 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-01 23:43:12.376172993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:43:12.385208 1751274 docker.go:318] overlay module found
	I1001 23:43:12.387343 1751274 out.go:177] * Using the docker driver based on user configuration
	I1001 23:43:12.389031 1751274 start.go:297] selected driver: docker
	I1001 23:43:12.389049 1751274 start.go:901] validating driver "docker" against <nil>
	I1001 23:43:12.389063 1751274 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:43:12.389690 1751274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:43:12.439461 1751274 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-01 23:43:12.42873126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:43:12.439682 1751274 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:43:12.439918 1751274 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:43:12.442217 1751274 out.go:177] * Using Docker driver with root privileges
	I1001 23:43:12.444011 1751274 cni.go:84] Creating CNI manager for ""
	I1001 23:43:12.444075 1751274 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 23:43:12.444087 1751274 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 23:43:12.444167 1751274 start.go:340] cluster config:
	{Name:addons-515343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-515343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:43:12.446346 1751274 out.go:177] * Starting "addons-515343" primary control-plane node in "addons-515343" cluster
	I1001 23:43:12.448623 1751274 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1001 23:43:12.450553 1751274 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1001 23:43:12.452484 1751274 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1001 23:43:12.452543 1751274 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1001 23:43:12.452561 1751274 cache.go:56] Caching tarball of preloaded images
	I1001 23:43:12.452543 1751274 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 23:43:12.452663 1751274 preload.go:172] Found /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 23:43:12.452674 1751274 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1001 23:43:12.453023 1751274 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/config.json ...
	I1001 23:43:12.453047 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/config.json: {Name:mk90dd28c1f187625c39a2cec18307e40ee899c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:12.467544 1751274 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 23:43:12.467663 1751274 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 23:43:12.467687 1751274 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1001 23:43:12.467693 1751274 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1001 23:43:12.467703 1751274 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1001 23:43:12.467710 1751274 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1001 23:43:28.959003 1751274 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1001 23:43:28.959059 1751274 cache.go:194] Successfully downloaded all kic artifacts
	I1001 23:43:28.959102 1751274 start.go:360] acquireMachinesLock for addons-515343: {Name:mk65d63286b6974a264f1f9287795068acbf6fa5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:43:28.959215 1751274 start.go:364] duration metric: took 90.032µs to acquireMachinesLock for "addons-515343"
	I1001 23:43:28.959246 1751274 start.go:93] Provisioning new machine with config: &{Name:addons-515343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-515343 Namespace:default APIServerHAVIP: APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1001 23:43:28.959324 1751274 start.go:125] createHost starting for "" (driver="docker")
	I1001 23:43:28.962545 1751274 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1001 23:43:28.962812 1751274 start.go:159] libmachine.API.Create for "addons-515343" (driver="docker")
	I1001 23:43:28.962849 1751274 client.go:168] LocalClient.Create starting
	I1001 23:43:28.962961 1751274 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem
	I1001 23:43:29.329388 1751274 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/cert.pem
	I1001 23:43:29.919221 1751274 cli_runner.go:164] Run: docker network inspect addons-515343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1001 23:43:29.934759 1751274 cli_runner.go:211] docker network inspect addons-515343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1001 23:43:29.934863 1751274 network_create.go:284] running [docker network inspect addons-515343] to gather additional debugging logs...
	I1001 23:43:29.934882 1751274 cli_runner.go:164] Run: docker network inspect addons-515343
	W1001 23:43:29.952265 1751274 cli_runner.go:211] docker network inspect addons-515343 returned with exit code 1
	I1001 23:43:29.952300 1751274 network_create.go:287] error running [docker network inspect addons-515343]: docker network inspect addons-515343: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-515343 not found
	I1001 23:43:29.952314 1751274 network_create.go:289] output of [docker network inspect addons-515343]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-515343 not found
	
	** /stderr **
	I1001 23:43:29.952421 1751274 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 23:43:29.968389 1751274 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400181be90}
	I1001 23:43:29.968432 1751274 network_create.go:124] attempt to create docker network addons-515343 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1001 23:43:29.968510 1751274 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-515343 addons-515343
	I1001 23:43:30.050466 1751274 network_create.go:108] docker network addons-515343 192.168.49.0/24 created
	I1001 23:43:30.050503 1751274 kic.go:121] calculated static IP "192.168.49.2" for the "addons-515343" container
	I1001 23:43:30.050598 1751274 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1001 23:43:30.067720 1751274 cli_runner.go:164] Run: docker volume create addons-515343 --label name.minikube.sigs.k8s.io=addons-515343 --label created_by.minikube.sigs.k8s.io=true
	I1001 23:43:30.086602 1751274 oci.go:103] Successfully created a docker volume addons-515343
	I1001 23:43:30.086719 1751274 cli_runner.go:164] Run: docker run --rm --name addons-515343-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-515343 --entrypoint /usr/bin/test -v addons-515343:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1001 23:43:32.154461 1751274 cli_runner.go:217] Completed: docker run --rm --name addons-515343-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-515343 --entrypoint /usr/bin/test -v addons-515343:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (2.067691902s)
	I1001 23:43:32.154490 1751274 oci.go:107] Successfully prepared a docker volume addons-515343
	I1001 23:43:32.154514 1751274 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1001 23:43:32.154533 1751274 kic.go:194] Starting extracting preloaded images to volume ...
	I1001 23:43:32.154608 1751274 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-515343:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1001 23:43:36.171747 1751274 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-515343:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.017099627s)
	I1001 23:43:36.171782 1751274 kic.go:203] duration metric: took 4.017245806s to extract preloaded images to volume ...
	W1001 23:43:36.171927 1751274 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1001 23:43:36.172053 1751274 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1001 23:43:36.227025 1751274 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-515343 --name addons-515343 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-515343 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-515343 --network addons-515343 --ip 192.168.49.2 --volume addons-515343:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1001 23:43:36.560250 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Running}}
	I1001 23:43:36.578324 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:43:36.601609 1751274 cli_runner.go:164] Run: docker exec addons-515343 stat /var/lib/dpkg/alternatives/iptables
	I1001 23:43:36.678888 1751274 oci.go:144] the created container "addons-515343" has a running status.
	I1001 23:43:36.678923 1751274 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa...
	I1001 23:43:37.500696 1751274 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1001 23:43:37.530107 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:43:37.553786 1751274 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1001 23:43:37.553806 1751274 kic_runner.go:114] Args: [docker exec --privileged addons-515343 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1001 23:43:37.618127 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:43:37.635236 1751274 machine.go:93] provisionDockerMachine start ...
	I1001 23:43:37.635337 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:43:37.654315 1751274 main.go:141] libmachine: Using SSH client type: native
	I1001 23:43:37.654590 1751274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1001 23:43:37.654607 1751274 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 23:43:37.791817 1751274 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-515343
	
	I1001 23:43:37.791842 1751274 ubuntu.go:169] provisioning hostname "addons-515343"
	I1001 23:43:37.791907 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:43:37.808218 1751274 main.go:141] libmachine: Using SSH client type: native
	I1001 23:43:37.808559 1751274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1001 23:43:37.808578 1751274 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-515343 && echo "addons-515343" | sudo tee /etc/hostname
	I1001 23:43:37.951947 1751274 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-515343
	
	I1001 23:43:37.952038 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:43:37.969133 1751274 main.go:141] libmachine: Using SSH client type: native
	I1001 23:43:37.969374 1751274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1001 23:43:37.969396 1751274 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-515343' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-515343/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-515343' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:43:38.104176 1751274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:43:38.104203 1751274 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19740-1745120/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-1745120/.minikube}
	I1001 23:43:38.104230 1751274 ubuntu.go:177] setting up certificates
	I1001 23:43:38.104240 1751274 provision.go:84] configureAuth start
	I1001 23:43:38.104299 1751274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-515343
	I1001 23:43:38.120405 1751274 provision.go:143] copyHostCerts
	I1001 23:43:38.120523 1751274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.pem (1082 bytes)
	I1001 23:43:38.120679 1751274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-1745120/.minikube/cert.pem (1123 bytes)
	I1001 23:43:38.120738 1751274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-1745120/.minikube/key.pem (1675 bytes)
	I1001 23:43:38.120784 1751274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca-key.pem org=jenkins.addons-515343 san=[127.0.0.1 192.168.49.2 addons-515343 localhost minikube]
	I1001 23:43:38.403779 1751274 provision.go:177] copyRemoteCerts
	I1001 23:43:38.403852 1751274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:43:38.403895 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:43:38.423022 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:43:38.517540 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1001 23:43:38.541650 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 23:43:38.564147 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:43:38.586989 1751274 provision.go:87] duration metric: took 482.726932ms to configureAuth
	I1001 23:43:38.587017 1751274 ubuntu.go:193] setting minikube options for container-runtime
	I1001 23:43:38.587231 1751274 config.go:182] Loaded profile config "addons-515343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 23:43:38.587247 1751274 machine.go:96] duration metric: took 951.982545ms to provisionDockerMachine
	I1001 23:43:38.587254 1751274 client.go:171] duration metric: took 9.624395379s to LocalClient.Create
	I1001 23:43:38.587275 1751274 start.go:167] duration metric: took 9.624464727s to libmachine.API.Create "addons-515343"
	I1001 23:43:38.587287 1751274 start.go:293] postStartSetup for "addons-515343" (driver="docker")
	I1001 23:43:38.587296 1751274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:43:38.587349 1751274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:43:38.587401 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:43:38.603456 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:43:38.697370 1751274 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:43:38.700408 1751274 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1001 23:43:38.700470 1751274 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1001 23:43:38.700493 1751274 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1001 23:43:38.700504 1751274 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1001 23:43:38.700515 1751274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-1745120/.minikube/addons for local assets ...
	I1001 23:43:38.700586 1751274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-1745120/.minikube/files for local assets ...
	I1001 23:43:38.700615 1751274 start.go:296] duration metric: took 113.322197ms for postStartSetup
	I1001 23:43:38.700921 1751274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-515343
	I1001 23:43:38.716282 1751274 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/config.json ...
	I1001 23:43:38.716662 1751274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 23:43:38.716715 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:43:38.731969 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:43:38.821637 1751274 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1001 23:43:38.825612 1751274 start.go:128] duration metric: took 9.866273266s to createHost
	I1001 23:43:38.825635 1751274 start.go:83] releasing machines lock for "addons-515343", held for 9.866406465s
	I1001 23:43:38.825704 1751274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-515343
	I1001 23:43:38.840973 1751274 ssh_runner.go:195] Run: cat /version.json
	I1001 23:43:38.841024 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:43:38.841036 1751274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:43:38.841094 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:43:38.859256 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:43:38.869986 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:43:39.078001 1751274 ssh_runner.go:195] Run: systemctl --version
	I1001 23:43:39.082104 1751274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 23:43:39.086149 1751274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1001 23:43:39.109819 1751274 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1001 23:43:39.109893 1751274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:43:39.137998 1751274 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
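Note: the two find/sed commands above are minikube's usual CNI cleanup: the loopback config is patched (given a "name" field and cniVersion 1.0.0) and any podman/bridge configs are renamed to *.mk_disabled so they cannot conflict with the CNI applied later in the start-up. A minimal way to inspect the result inside the node (a sketch; the container name is simply this run's profile):

    docker exec addons-515343 ls -la /etc/cni/net.d
    # expect the patched loopback conf plus 87-podman-bridge.conflist.mk_disabled and
    # 100-crio-bridge.conf.mk_disabled, matching the "disabled ... bridge cni config(s)" line above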
	I1001 23:43:39.138021 1751274 start.go:495] detecting cgroup driver to use...
	I1001 23:43:39.138054 1751274 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1001 23:43:39.138113 1751274 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1001 23:43:39.150128 1751274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 23:43:39.161110 1751274 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:43:39.161225 1751274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:43:39.174900 1751274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:43:39.189271 1751274 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:43:39.285252 1751274 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:43:39.377128 1751274 docker.go:233] disabling docker service ...
	I1001 23:43:39.377215 1751274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:43:39.396248 1751274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:43:39.407623 1751274 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:43:39.489686 1751274 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:43:39.572425 1751274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:43:39.583456 1751274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:43:39.599204 1751274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1001 23:43:39.608913 1751274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 23:43:39.618268 1751274 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 23:43:39.618335 1751274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 23:43:39.628085 1751274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 23:43:39.637599 1751274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 23:43:39.646909 1751274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 23:43:39.656200 1751274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:43:39.664962 1751274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1001 23:43:39.675009 1751274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1001 23:43:39.684817 1751274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1001 23:43:39.694553 1751274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:43:39.703012 1751274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:43:39.711353 1751274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:43:39.789163 1751274 ssh_runner.go:195] Run: sudo systemctl restart containerd
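Note: the sed edits at 23:43:39 rewrite /etc/containerd/config.toml in place — sandbox image pinned to registry.k8s.io/pause:3.10, restrict_oom_score_adj disabled, SystemdCgroup set to false (matching the "cgroupfs" driver detected on the host), legacy runc v1 runtimes mapped to io.containerd.runc.v2, and enable_unprivileged_ports re-inserted under the CRI plugin — before containerd is restarted. A quick way to confirm the cgroup-driver setting after the restart (a sketch; container name taken from this run):

    docker exec addons-515343 grep -n 'SystemdCgroup' /etc/containerd/config.toml
    # expected per the sed above: SystemdCgroup = false, i.e. containerd manages cgroups via cgroupfs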
	I1001 23:43:39.927015 1751274 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1001 23:43:39.927149 1751274 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1001 23:43:39.930731 1751274 start.go:563] Will wait 60s for crictl version
	I1001 23:43:39.930840 1751274 ssh_runner.go:195] Run: which crictl
	I1001 23:43:39.934274 1751274 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:43:39.974928 1751274 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1001 23:43:39.975057 1751274 ssh_runner.go:195] Run: containerd --version
	I1001 23:43:39.996475 1751274 ssh_runner.go:195] Run: containerd --version
	I1001 23:43:40.024269 1751274 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1001 23:43:40.026538 1751274 cli_runner.go:164] Run: docker network inspect addons-515343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 23:43:40.043175 1751274 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1001 23:43:40.047006 1751274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:43:40.058049 1751274 kubeadm.go:883] updating cluster {Name:addons-515343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-515343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:43:40.058170 1751274 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1001 23:43:40.058239 1751274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:43:40.096432 1751274 containerd.go:627] all images are preloaded for containerd runtime.
	I1001 23:43:40.096479 1751274 containerd.go:534] Images already preloaded, skipping extraction
	I1001 23:43:40.096545 1751274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:43:40.131597 1751274 containerd.go:627] all images are preloaded for containerd runtime.
	I1001 23:43:40.131620 1751274 cache_images.go:84] Images are preloaded, skipping loading
	I1001 23:43:40.131628 1751274 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I1001 23:43:40.131722 1751274 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-515343 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-515343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:43:40.131785 1751274 ssh_runner.go:195] Run: sudo crictl info
	I1001 23:43:40.167984 1751274 cni.go:84] Creating CNI manager for ""
	I1001 23:43:40.168054 1751274 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 23:43:40.168079 1751274 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:43:40.168123 1751274 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-515343 NodeName:addons-515343 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:43:40.168270 1751274 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-515343"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 23:43:40.168360 1751274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:43:40.176970 1751274 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:43:40.177042 1751274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 23:43:40.185528 1751274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1001 23:43:40.203063 1751274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:43:40.221364 1751274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I1001 23:43:40.239098 1751274 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1001 23:43:40.242556 1751274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:43:40.253071 1751274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:43:40.333414 1751274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:43:40.352898 1751274 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343 for IP: 192.168.49.2
	I1001 23:43:40.352917 1751274 certs.go:194] generating shared ca certs ...
	I1001 23:43:40.352932 1751274 certs.go:226] acquiring lock for ca certs: {Name:mkeb93c689dc39169cb991acba6d63d702f9e0e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:40.353069 1751274 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.key
	I1001 23:43:41.098808 1751274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt ...
	I1001 23:43:41.098846 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt: {Name:mk54087b54ec0c66dd29feb468d84ca15fce39b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:41.099077 1751274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.key ...
	I1001 23:43:41.099094 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.key: {Name:mk38cebd671d351c47f34bcdac364c2a9c979bef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:41.099184 1751274 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.key
	I1001 23:43:41.603024 1751274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.crt ...
	I1001 23:43:41.603058 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.crt: {Name:mkca9ede23d19197bf2a28548f9b7abf8c3ed29f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:41.603247 1751274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.key ...
	I1001 23:43:41.603260 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.key: {Name:mk0871687bdeccb6d2e36b84a56b72832700bedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:41.603343 1751274 certs.go:256] generating profile certs ...
	I1001 23:43:41.603411 1751274 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.key
	I1001 23:43:41.603433 1751274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt with IP's: []
	I1001 23:43:42.243920 1751274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt ...
	I1001 23:43:42.243953 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: {Name:mke77b97c3906d154a9f1afefd3ad39977ad97ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:42.244143 1751274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.key ...
	I1001 23:43:42.244155 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.key: {Name:mk57cd01b0c8154e24d832f7f4e790362527b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:42.244238 1751274 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.key.19aca2dc
	I1001 23:43:42.244260 1751274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.crt.19aca2dc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1001 23:43:42.854381 1751274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.crt.19aca2dc ...
	I1001 23:43:42.854414 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.crt.19aca2dc: {Name:mka13dfecc3390162cf88005552eaf170f356839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:42.854598 1751274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.key.19aca2dc ...
	I1001 23:43:42.854612 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.key.19aca2dc: {Name:mk51bc8d58b678e97289f57cf69cfdda233ea9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:42.854700 1751274 certs.go:381] copying /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.crt.19aca2dc -> /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.crt
	I1001 23:43:42.854778 1751274 certs.go:385] copying /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.key.19aca2dc -> /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.key
	I1001 23:43:42.854834 1751274 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/proxy-client.key
	I1001 23:43:42.854851 1751274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/proxy-client.crt with IP's: []
	I1001 23:43:43.065648 1751274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/proxy-client.crt ...
	I1001 23:43:43.065683 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/proxy-client.crt: {Name:mk3ceadd4aae1032c4fb624ee769c3c1fed46782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:43.065860 1751274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/proxy-client.key ...
	I1001 23:43:43.065868 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/proxy-client.key: {Name:mkf9758b38641de51defd2e960aa4c19b698f76b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:43:43.066038 1751274 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:43:43.066074 1751274 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem (1082 bytes)
	I1001 23:43:43.066104 1751274 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:43:43.066128 1751274 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/key.pem (1675 bytes)
	I1001 23:43:43.066779 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:43:43.091639 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:43:43.116160 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:43:43.140711 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 23:43:43.164568 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 23:43:43.188734 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 23:43:43.213081 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:43:43.237697 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 23:43:43.262404 1751274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:43:43.286520 1751274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:43:43.303986 1751274 ssh_runner.go:195] Run: openssl version
	I1001 23:43:43.309764 1751274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:43:43.319457 1751274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:43:43.322669 1751274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 23:43 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:43:43.322729 1751274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:43:43.329421 1751274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
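Note: the b5213941.0 symlink created above is the OpenSSL subject-hash name for minikubeCA.pem, which is how OpenSSL-linked programs locate trusted CAs under /etc/ssl/certs. The hash can be re-derived inside the node with the same openssl invocation the provisioner runs at 23:43:43.322729 (a sketch; container name taken from this run):

    docker exec addons-515343 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above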
	I1001 23:43:43.338577 1751274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:43:43.341722 1751274 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:43:43.341770 1751274 kubeadm.go:392] StartCluster: {Name:addons-515343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-515343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:43:43.341845 1751274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1001 23:43:43.341904 1751274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:43:43.385904 1751274 cri.go:89] found id: ""
	I1001 23:43:43.385984 1751274 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 23:43:43.395845 1751274 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 23:43:43.405075 1751274 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1001 23:43:43.405143 1751274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 23:43:43.415319 1751274 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 23:43:43.415341 1751274 kubeadm.go:157] found existing configuration files:
	
	I1001 23:43:43.415393 1751274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 23:43:43.424477 1751274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 23:43:43.424546 1751274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 23:43:43.433731 1751274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 23:43:43.442681 1751274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 23:43:43.442768 1751274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 23:43:43.451684 1751274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 23:43:43.460656 1751274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 23:43:43.460748 1751274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 23:43:43.469489 1751274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 23:43:43.478083 1751274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 23:43:43.478167 1751274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 23:43:43.486775 1751274 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1001 23:43:43.528144 1751274 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 23:43:43.528512 1751274 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 23:43:43.547922 1751274 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1001 23:43:43.547999 1751274 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1001 23:43:43.548044 1751274 kubeadm.go:310] OS: Linux
	I1001 23:43:43.548094 1751274 kubeadm.go:310] CGROUPS_CPU: enabled
	I1001 23:43:43.548145 1751274 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1001 23:43:43.548195 1751274 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1001 23:43:43.548246 1751274 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1001 23:43:43.548298 1751274 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1001 23:43:43.548353 1751274 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1001 23:43:43.548403 1751274 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1001 23:43:43.548463 1751274 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1001 23:43:43.548514 1751274 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1001 23:43:43.608328 1751274 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 23:43:43.608524 1751274 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 23:43:43.608676 1751274 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 23:43:43.613979 1751274 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 23:43:43.618098 1751274 out.go:235]   - Generating certificates and keys ...
	I1001 23:43:43.618240 1751274 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 23:43:43.618311 1751274 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 23:43:44.082654 1751274 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 23:43:44.981040 1751274 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 23:43:45.091175 1751274 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 23:43:45.775755 1751274 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 23:43:46.568912 1751274 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 23:43:46.569202 1751274 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-515343 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 23:43:46.918879 1751274 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 23:43:46.919270 1751274 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-515343 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 23:43:47.260896 1751274 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 23:43:47.681085 1751274 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 23:43:48.267935 1751274 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 23:43:48.268224 1751274 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 23:43:49.041646 1751274 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 23:43:49.625513 1751274 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 23:43:50.439774 1751274 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 23:43:50.836359 1751274 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 23:43:51.352200 1751274 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 23:43:51.352916 1751274 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 23:43:51.355796 1751274 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 23:43:51.358032 1751274 out.go:235]   - Booting up control plane ...
	I1001 23:43:51.358131 1751274 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 23:43:51.358207 1751274 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 23:43:51.358908 1751274 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 23:43:51.370172 1751274 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 23:43:51.376663 1751274 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 23:43:51.376730 1751274 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 23:43:51.478291 1751274 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 23:43:51.478414 1751274 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 23:43:52.979408 1751274 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501209354s
	I1001 23:43:52.979510 1751274 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 23:43:59.980804 1751274 kubeadm.go:310] [api-check] The API server is healthy after 7.001364821s
	I1001 23:44:00.000343 1751274 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 23:44:00.022680 1751274 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 23:44:00.079783 1751274 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 23:44:00.080275 1751274 kubeadm.go:310] [mark-control-plane] Marking the node addons-515343 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 23:44:00.093382 1751274 kubeadm.go:310] [bootstrap-token] Using token: avk156.psylypjrc7ibbnhk
	I1001 23:44:00.096150 1751274 out.go:235]   - Configuring RBAC rules ...
	I1001 23:44:00.096298 1751274 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 23:44:00.102896 1751274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 23:44:00.114919 1751274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 23:44:00.122008 1751274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 23:44:00.128422 1751274 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 23:44:00.134792 1751274 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 23:44:00.387843 1751274 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 23:44:00.812092 1751274 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 23:44:01.387401 1751274 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 23:44:01.388382 1751274 kubeadm.go:310] 
	I1001 23:44:01.388472 1751274 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 23:44:01.388480 1751274 kubeadm.go:310] 
	I1001 23:44:01.388556 1751274 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 23:44:01.388561 1751274 kubeadm.go:310] 
	I1001 23:44:01.388586 1751274 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 23:44:01.388644 1751274 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 23:44:01.388695 1751274 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 23:44:01.388703 1751274 kubeadm.go:310] 
	I1001 23:44:01.388756 1751274 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 23:44:01.388761 1751274 kubeadm.go:310] 
	I1001 23:44:01.388807 1751274 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 23:44:01.388812 1751274 kubeadm.go:310] 
	I1001 23:44:01.388863 1751274 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 23:44:01.388936 1751274 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 23:44:01.389003 1751274 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 23:44:01.389008 1751274 kubeadm.go:310] 
	I1001 23:44:01.389092 1751274 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 23:44:01.389167 1751274 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 23:44:01.389171 1751274 kubeadm.go:310] 
	I1001 23:44:01.389253 1751274 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token avk156.psylypjrc7ibbnhk \
	I1001 23:44:01.389354 1751274 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b6a42a4899bb25e92de4c268dbb6f72dad146ada217ace8f556dc8ad8b3030a2 \
	I1001 23:44:01.389374 1751274 kubeadm.go:310] 	--control-plane 
	I1001 23:44:01.389378 1751274 kubeadm.go:310] 
	I1001 23:44:01.389749 1751274 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 23:44:01.389768 1751274 kubeadm.go:310] 
	I1001 23:44:01.389850 1751274 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token avk156.psylypjrc7ibbnhk \
	I1001 23:44:01.389957 1751274 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b6a42a4899bb25e92de4c268dbb6f72dad146ada217ace8f556dc8ad8b3030a2 
	I1001 23:44:01.393852 1751274 kubeadm.go:310] W1001 23:43:43.524806    1033 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:44:01.394158 1751274 kubeadm.go:310] W1001 23:43:43.525750    1033 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:44:01.394376 1751274 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1001 23:44:01.394484 1751274 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 23:44:01.394502 1751274 cni.go:84] Creating CNI manager for ""
	I1001 23:44:01.394517 1751274 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 23:44:01.398483 1751274 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 23:44:01.400384 1751274 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 23:44:01.404059 1751274 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 23:44:01.404081 1751274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 23:44:01.424259 1751274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
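Note: the cni.yaml applied above is the CNI manifest selected at 23:44:01.394517 ("docker" driver + "containerd" runtime, recommending kindnet). One way to check that its pods come up (a sketch; the exact pod names and labels are not shown in this log, so a plain grep is used):

    kubectl --context addons-515343 -n kube-system get pods -o wide | grep -i kindnet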
	I1001 23:44:01.707158 1751274 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 23:44:01.707292 1751274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:44:01.707376 1751274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-515343 minikube.k8s.io/updated_at=2024_10_01T23_44_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=addons-515343 minikube.k8s.io/primary=true
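Note: the kubectl label call above stamps the control-plane node with minikube's bookkeeping labels (version, commit, updated_at, profile name, primary flag). They can be read back with (a sketch using the kubeconfig context created for this profile):

    kubectl --context addons-515343 get node addons-515343 --show-labels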
	I1001 23:44:01.715560 1751274 ops.go:34] apiserver oom_adj: -16
	I1001 23:44:01.899379 1751274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:44:02.399783 1751274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:44:02.899497 1751274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:44:03.400093 1751274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:44:03.899976 1751274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:44:04.399460 1751274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:44:04.900320 1751274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:44:05.399633 1751274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:44:05.503193 1751274 kubeadm.go:1113] duration metric: took 3.795950091s to wait for elevateKubeSystemPrivileges
	I1001 23:44:05.503230 1751274 kubeadm.go:394] duration metric: took 22.161463099s to StartCluster
	I1001 23:44:05.503248 1751274 settings.go:142] acquiring lock: {Name:mk200f8894606b147c1230e7434ca41f474a2cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:44:05.503372 1751274 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1001 23:44:05.503747 1751274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/kubeconfig: {Name:mk014bd742e0b0f4a72d987c0fd643ed22274647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:44:05.503942 1751274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 23:44:05.503969 1751274 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1001 23:44:05.504210 1751274 config.go:182] Loaded profile config "addons-515343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 23:44:05.504247 1751274 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
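Note: the toEnable map above lists every addon minikube will reconcile for this profile, with volcano, ingress, metrics-server and csi-hostpath-driver among those set to true. Individual addons can also be toggled after start via the CLI, e.g. (a sketch; binary path and profile name as used in this run):

    out/minikube-linux-arm64 addons enable volcano -p addons-515343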
	I1001 23:44:05.504320 1751274 addons.go:69] Setting yakd=true in profile "addons-515343"
	I1001 23:44:05.504335 1751274 addons.go:234] Setting addon yakd=true in "addons-515343"
	I1001 23:44:05.504358 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.504762 1751274 addons.go:69] Setting inspektor-gadget=true in profile "addons-515343"
	I1001 23:44:05.504780 1751274 addons.go:234] Setting addon inspektor-gadget=true in "addons-515343"
	I1001 23:44:05.504805 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.504836 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.505280 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.505477 1751274 addons.go:69] Setting metrics-server=true in profile "addons-515343"
	I1001 23:44:05.505496 1751274 addons.go:234] Setting addon metrics-server=true in "addons-515343"
	I1001 23:44:05.505519 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.505922 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.508279 1751274 addons.go:69] Setting cloud-spanner=true in profile "addons-515343"
	I1001 23:44:05.508362 1751274 addons.go:234] Setting addon cloud-spanner=true in "addons-515343"
	I1001 23:44:05.509671 1751274 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-515343"
	I1001 23:44:05.509696 1751274 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-515343"
	I1001 23:44:05.509720 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.510190 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.510453 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.511695 1751274 out.go:177] * Verifying Kubernetes components...
	I1001 23:44:05.511896 1751274 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-515343"
	I1001 23:44:05.511946 1751274 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-515343"
	I1001 23:44:05.511975 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.514556 1751274 addons.go:69] Setting registry=true in profile "addons-515343"
	I1001 23:44:05.514583 1751274 addons.go:234] Setting addon registry=true in "addons-515343"
	I1001 23:44:05.514608 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.515023 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.515531 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.522462 1751274 addons.go:69] Setting default-storageclass=true in profile "addons-515343"
	I1001 23:44:05.522493 1751274 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-515343"
	I1001 23:44:05.522886 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.526829 1751274 addons.go:69] Setting storage-provisioner=true in profile "addons-515343"
	I1001 23:44:05.526909 1751274 addons.go:234] Setting addon storage-provisioner=true in "addons-515343"
	I1001 23:44:05.526979 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.527629 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.538506 1751274 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-515343"
	I1001 23:44:05.538581 1751274 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-515343"
	I1001 23:44:05.542250 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.546443 1751274 addons.go:69] Setting gcp-auth=true in profile "addons-515343"
	I1001 23:44:05.546519 1751274 mustload.go:65] Loading cluster: addons-515343
	I1001 23:44:05.546778 1751274 config.go:182] Loaded profile config "addons-515343": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 23:44:05.547137 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.568764 1751274 addons.go:69] Setting ingress=true in profile "addons-515343"
	I1001 23:44:05.568843 1751274 addons.go:234] Setting addon ingress=true in "addons-515343"
	I1001 23:44:05.568940 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.569130 1751274 addons.go:69] Setting ingress-dns=true in profile "addons-515343"
	I1001 23:44:05.569164 1751274 addons.go:234] Setting addon ingress-dns=true in "addons-515343"
	I1001 23:44:05.569215 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.571884 1751274 addons.go:69] Setting volcano=true in profile "addons-515343"
	I1001 23:44:05.571953 1751274 addons.go:234] Setting addon volcano=true in "addons-515343"
	I1001 23:44:05.572014 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.572607 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.583056 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.588701 1751274 addons.go:69] Setting volumesnapshots=true in profile "addons-515343"
	I1001 23:44:05.588774 1751274 addons.go:234] Setting addon volumesnapshots=true in "addons-515343"
	I1001 23:44:05.588826 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.589463 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.614872 1751274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:44:05.637822 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.651567 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.676883 1751274 addons.go:234] Setting addon default-storageclass=true in "addons-515343"
	I1001 23:44:05.676932 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.677370 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.677533 1751274 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1001 23:44:05.693844 1751274 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1001 23:44:05.694079 1751274 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1001 23:44:05.710898 1751274 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1001 23:44:05.712894 1751274 out.go:177]   - Using image docker.io/registry:2.8.3
	I1001 23:44:05.717555 1751274 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 23:44:05.717582 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 23:44:05.717650 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.718001 1751274 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1001 23:44:05.718015 1751274 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1001 23:44:05.718064 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.741250 1751274 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 23:44:05.741275 1751274 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 23:44:05.741346 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.744653 1751274 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 23:44:05.744686 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 23:44:05.744757 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.766247 1751274 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 23:44:05.772699 1751274 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 23:44:05.773569 1751274 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 23:44:05.773584 1751274 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 23:44:05.773648 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.796555 1751274 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 23:44:05.798690 1751274 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 23:44:05.800760 1751274 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 23:44:05.802845 1751274 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 23:44:05.805997 1751274 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1001 23:44:05.808196 1751274 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1001 23:44:05.808212 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 23:44:05.808277 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.812421 1751274 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-515343"
	I1001 23:44:05.812534 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.813092 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:05.818676 1751274 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 23:44:05.820910 1751274 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 23:44:05.821294 1751274 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I1001 23:44:05.864895 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:05.884272 1751274 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I1001 23:44:05.884508 1751274 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 23:44:05.884665 1751274 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1001 23:44:05.884745 1751274 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 23:44:05.917940 1751274 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 23:44:05.918011 1751274 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 23:44:05.918118 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.927627 1751274 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 23:44:05.927748 1751274 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 23:44:05.928060 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.942311 1751274 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I1001 23:44:05.945428 1751274 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1001 23:44:05.945456 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I1001 23:44:05.945525 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.950052 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:05.950418 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:05.950929 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:05.952543 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:05.953205 1751274 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 23:44:05.953220 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1001 23:44:05.953274 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.965094 1751274 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:44:05.965326 1751274 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1001 23:44:05.965515 1751274 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 23:44:05.965530 1751274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 23:44:05.965602 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.967225 1751274 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:44:05.967243 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 23:44:05.967295 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:05.988064 1751274 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 23:44:05.995004 1751274 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 23:44:05.995316 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:05.997019 1751274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 23:44:05.997560 1751274 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 23:44:05.997574 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 23:44:06.000652 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:06.008634 1751274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:44:06.024541 1751274 out.go:177]   - Using image docker.io/busybox:stable
	I1001 23:44:06.027316 1751274 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 23:44:06.034751 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:06.038384 1751274 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 23:44:06.038453 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 23:44:06.038547 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:06.042753 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:06.064873 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:06.067488 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:06.114691 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:06.116888 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:06.120765 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:06.127090 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	W1001 23:44:06.141298 1751274 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1001 23:44:06.141329 1751274 retry.go:31] will retry after 315.096017ms: ssh: handshake failed: EOF
	I1001 23:44:06.146222 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:06.684047 1751274 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 23:44:06.684077 1751274 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 23:44:06.688099 1751274 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 23:44:06.688166 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 23:44:06.697747 1751274 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 23:44:06.697769 1751274 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 23:44:06.746544 1751274 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1001 23:44:06.746566 1751274 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1001 23:44:06.775227 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:44:06.938749 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 23:44:06.941939 1751274 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 23:44:06.942010 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 23:44:06.975187 1751274 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 23:44:06.975256 1751274 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 23:44:07.016524 1751274 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 23:44:07.016593 1751274 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 23:44:07.030026 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 23:44:07.059748 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 23:44:07.066461 1751274 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 23:44:07.066491 1751274 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 23:44:07.083958 1751274 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 23:44:07.083986 1751274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 23:44:07.092905 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 23:44:07.102577 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 23:44:07.115267 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1001 23:44:07.118519 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 23:44:07.141003 1751274 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1001 23:44:07.141124 1751274 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1001 23:44:07.275745 1751274 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 23:44:07.275822 1751274 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 23:44:07.347078 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 23:44:07.359058 1751274 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 23:44:07.359132 1751274 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 23:44:07.396359 1751274 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 23:44:07.396438 1751274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 23:44:07.421984 1751274 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1001 23:44:07.422062 1751274 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1001 23:44:07.431821 1751274 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 23:44:07.431889 1751274 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 23:44:07.456152 1751274 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 23:44:07.456224 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 23:44:07.505631 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 23:44:07.578877 1751274 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 23:44:07.578957 1751274 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 23:44:07.640557 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 23:44:07.656676 1751274 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 23:44:07.656755 1751274 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 23:44:07.658803 1751274 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 23:44:07.658860 1751274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 23:44:07.761705 1751274 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1001 23:44:07.761778 1751274 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1001 23:44:07.831708 1751274 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 23:44:07.831778 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 23:44:07.884091 1751274 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 23:44:07.884171 1751274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 23:44:07.965323 1751274 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1001 23:44:07.965397 1751274 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1001 23:44:07.986685 1751274 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.977966574s)
	I1001 23:44:07.986860 1751274 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.989817953s)
	I1001 23:44:07.986927 1751274 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1001 23:44:07.988526 1751274 node_ready.go:35] waiting up to 6m0s for node "addons-515343" to be "Ready" ...
	I1001 23:44:07.993060 1751274 node_ready.go:49] node "addons-515343" has status "Ready":"True"
	I1001 23:44:07.993082 1751274 node_ready.go:38] duration metric: took 4.498976ms for node "addons-515343" to be "Ready" ...
	I1001 23:44:07.993094 1751274 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:44:08.011922 1751274 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dp6kf" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:08.126145 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 23:44:08.205860 1751274 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 23:44:08.205940 1751274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 23:44:08.480445 1751274 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1001 23:44:08.480527 1751274 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1001 23:44:08.492594 1751274 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-515343" context rescaled to 1 replicas
	I1001 23:44:08.571339 1751274 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 23:44:08.571421 1751274 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1001 23:44:08.598888 1751274 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 23:44:08.598962 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 23:44:08.605110 1751274 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 23:44:08.605178 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1001 23:44:08.665763 1751274 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 23:44:08.665842 1751274 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 23:44:08.691402 1751274 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 23:44:08.691483 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 23:44:09.030302 1751274 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-dp6kf" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-dp6kf" not found
	I1001 23:44:09.030378 1751274 pod_ready.go:82] duration metric: took 1.018377953s for pod "coredns-7c65d6cfc9-dp6kf" in "kube-system" namespace to be "Ready" ...
	E1001 23:44:09.030412 1751274 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-dp6kf" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-dp6kf" not found
	I1001 23:44:09.030438 1751274 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sf59g" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:09.041804 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.266492666s)
	I1001 23:44:09.053145 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 23:44:09.111698 1751274 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 23:44:09.111784 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 23:44:09.665002 1751274 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 23:44:09.665075 1751274 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 23:44:10.263815 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 23:44:11.050479 1751274 pod_ready.go:103] pod "coredns-7c65d6cfc9-sf59g" in "kube-system" namespace has status "Ready":"False"
	I1001 23:44:11.524289 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.585486183s)
	I1001 23:44:12.976086 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.946024817s)
	I1001 23:44:12.976159 1751274 addons.go:475] Verifying addon ingress=true in "addons-515343"
	I1001 23:44:12.976334 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.916558037s)
	I1001 23:44:12.976417 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.883489645s)
	I1001 23:44:12.976482 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.873884053s)
	I1001 23:44:12.978965 1751274 out.go:177] * Verifying ingress addon...
	I1001 23:44:12.981857 1751274 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 23:44:12.988700 1751274 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 23:44:12.988728 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:13.153183 1751274 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 23:44:13.153264 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:13.181073 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:13.422002 1751274 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 23:44:13.487746 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:13.539023 1751274 pod_ready.go:103] pod "coredns-7c65d6cfc9-sf59g" in "kube-system" namespace has status "Ready":"False"
	I1001 23:44:13.543556 1751274 addons.go:234] Setting addon gcp-auth=true in "addons-515343"
	I1001 23:44:13.543613 1751274 host.go:66] Checking if "addons-515343" exists ...
	I1001 23:44:13.544309 1751274 cli_runner.go:164] Run: docker container inspect addons-515343 --format={{.State.Status}}
	I1001 23:44:13.583109 1751274 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 23:44:13.583161 1751274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-515343
	I1001 23:44:13.607584 1751274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/addons-515343/id_rsa Username:docker}
	I1001 23:44:13.987298 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:14.492581 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:15.015270 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:15.505550 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:15.556206 1751274 pod_ready.go:103] pod "coredns-7c65d6cfc9-sf59g" in "kube-system" namespace has status "Ready":"False"
	I1001 23:44:15.806069 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.690720614s)
	I1001 23:44:15.806127 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.687539293s)
	I1001 23:44:15.806140 1751274 addons.go:475] Verifying addon registry=true in "addons-515343"
	I1001 23:44:15.806291 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.45913984s)
	I1001 23:44:15.806340 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.300640237s)
	I1001 23:44:15.806633 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.165989351s)
	I1001 23:44:15.806656 1751274 addons.go:475] Verifying addon metrics-server=true in "addons-515343"
	I1001 23:44:15.806715 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.680497602s)
	W1001 23:44:15.806750 1751274 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 23:44:15.806771 1751274 retry.go:31] will retry after 269.229937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 23:44:15.806848 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.753635165s)
	I1001 23:44:15.809448 1751274 out.go:177] * Verifying registry addon...
	I1001 23:44:15.811032 1751274 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-515343 service yakd-dashboard -n yakd-dashboard
	
	I1001 23:44:15.814391 1751274 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 23:44:15.849902 1751274 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 23:44:15.849930 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:16.034692 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:16.076617 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 23:44:16.262762 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.998858167s)
	I1001 23:44:16.262796 1751274 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-515343"
	I1001 23:44:16.262940 1751274 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.679811311s)
	I1001 23:44:16.266753 1751274 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 23:44:16.266812 1751274 out.go:177] * Verifying csi-hostpath-driver addon...
	I1001 23:44:16.269269 1751274 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1001 23:44:16.270132 1751274 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 23:44:16.272146 1751274 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 23:44:16.272173 1751274 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 23:44:16.339636 1751274 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 23:44:16.339662 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:16.348615 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:16.397146 1751274 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 23:44:16.397172 1751274 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 23:44:16.431964 1751274 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 23:44:16.431990 1751274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 23:44:16.451533 1751274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 23:44:16.488678 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:16.776305 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:16.818731 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:16.987168 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:17.318078 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:17.424332 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:17.515197 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:17.653057 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.576392007s)
	I1001 23:44:17.653173 1751274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.201611263s)
	I1001 23:44:17.656437 1751274 addons.go:475] Verifying addon gcp-auth=true in "addons-515343"
	I1001 23:44:17.658944 1751274 out.go:177] * Verifying gcp-auth addon...
	I1001 23:44:17.661932 1751274 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1001 23:44:17.664826 1751274 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 23:44:17.775396 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:17.818548 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:17.986596 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:18.036438 1751274 pod_ready.go:103] pod "coredns-7c65d6cfc9-sf59g" in "kube-system" namespace has status "Ready":"False"
	I1001 23:44:18.275231 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:18.374638 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:18.486935 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:18.774662 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:18.819190 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:18.987170 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:19.274911 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:19.318033 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:19.485966 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:19.775821 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:19.819437 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:19.990034 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:20.038691 1751274 pod_ready.go:103] pod "coredns-7c65d6cfc9-sf59g" in "kube-system" namespace has status "Ready":"False"
	I1001 23:44:20.275173 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:20.320050 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:20.487032 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:20.775246 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:20.818782 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:20.987194 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:21.037390 1751274 pod_ready.go:93] pod "coredns-7c65d6cfc9-sf59g" in "kube-system" namespace has status "Ready":"True"
	I1001 23:44:21.037458 1751274 pod_ready.go:82] duration metric: took 12.006991324s for pod "coredns-7c65d6cfc9-sf59g" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:21.037485 1751274 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-515343" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:21.042666 1751274 pod_ready.go:93] pod "etcd-addons-515343" in "kube-system" namespace has status "Ready":"True"
	I1001 23:44:21.042740 1751274 pod_ready.go:82] duration metric: took 5.232341ms for pod "etcd-addons-515343" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:21.042771 1751274 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-515343" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:21.048089 1751274 pod_ready.go:93] pod "kube-apiserver-addons-515343" in "kube-system" namespace has status "Ready":"True"
	I1001 23:44:21.048163 1751274 pod_ready.go:82] duration metric: took 5.369757ms for pod "kube-apiserver-addons-515343" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:21.048188 1751274 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-515343" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:21.054576 1751274 pod_ready.go:93] pod "kube-controller-manager-addons-515343" in "kube-system" namespace has status "Ready":"True"
	I1001 23:44:21.054651 1751274 pod_ready.go:82] duration metric: took 6.440689ms for pod "kube-controller-manager-addons-515343" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:21.054678 1751274 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sfvfg" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:21.060313 1751274 pod_ready.go:93] pod "kube-proxy-sfvfg" in "kube-system" namespace has status "Ready":"True"
	I1001 23:44:21.060387 1751274 pod_ready.go:82] duration metric: took 5.688453ms for pod "kube-proxy-sfvfg" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:21.060414 1751274 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-515343" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:21.275878 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:21.318522 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:21.434742 1751274 pod_ready.go:93] pod "kube-scheduler-addons-515343" in "kube-system" namespace has status "Ready":"True"
	I1001 23:44:21.434810 1751274 pod_ready.go:82] duration metric: took 374.376422ms for pod "kube-scheduler-addons-515343" in "kube-system" namespace to be "Ready" ...
	I1001 23:44:21.434834 1751274 pod_ready.go:39] duration metric: took 13.441727836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:44:21.434862 1751274 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:44:21.434955 1751274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:44:21.448103 1751274 api_server.go:72] duration metric: took 15.94410351s to wait for apiserver process to appear ...
	I1001 23:44:21.448177 1751274 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:44:21.448215 1751274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1001 23:44:21.457080 1751274 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1001 23:44:21.458276 1751274 api_server.go:141] control plane version: v1.31.1
	I1001 23:44:21.458296 1751274 api_server.go:131] duration metric: took 10.099596ms to wait for apiserver health ...
	I1001 23:44:21.458314 1751274 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:44:21.486452 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:21.641508 1751274 system_pods.go:59] 18 kube-system pods found
	I1001 23:44:21.641587 1751274 system_pods.go:61] "coredns-7c65d6cfc9-sf59g" [f14b09d6-d9ad-48b1-bf24-1e186c3b9a98] Running
	I1001 23:44:21.641610 1751274 system_pods.go:61] "csi-hostpath-attacher-0" [5a38357c-3fe5-4c64-b968-2c1e54ad76a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 23:44:21.641634 1751274 system_pods.go:61] "csi-hostpath-resizer-0" [6f4c65aa-b954-4064-adb0-ecf0ef3515a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 23:44:21.641673 1751274 system_pods.go:61] "csi-hostpathplugin-8h4np" [c1c436b7-a05b-4b37-9cd3-f9ae02e47026] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 23:44:21.641700 1751274 system_pods.go:61] "etcd-addons-515343" [f3a6cd31-66cd-486c-8459-2568036b3e02] Running
	I1001 23:44:21.641720 1751274 system_pods.go:61] "kindnet-hn8k9" [5e27e0d1-bd7f-453c-8b2b-c33941afe721] Running
	I1001 23:44:21.641739 1751274 system_pods.go:61] "kube-apiserver-addons-515343" [98991814-f19e-4fd6-9a1f-be5ca20388f2] Running
	I1001 23:44:21.641759 1751274 system_pods.go:61] "kube-controller-manager-addons-515343" [100ced96-91f9-4a61-9be2-c0d2c0604d87] Running
	I1001 23:44:21.641786 1751274 system_pods.go:61] "kube-ingress-dns-minikube" [46a1893f-9987-445c-997e-fd79e0b5e8cc] Running
	I1001 23:44:21.641809 1751274 system_pods.go:61] "kube-proxy-sfvfg" [817b7c1a-3ef7-47d3-9152-ebd3626b111a] Running
	I1001 23:44:21.641827 1751274 system_pods.go:61] "kube-scheduler-addons-515343" [65e15afa-5028-4fe4-8edd-82002393546a] Running
	I1001 23:44:21.641849 1751274 system_pods.go:61] "metrics-server-84c5f94fbc-42hls" [c4a49c1b-2064-4012-8dbe-0c11d66e402d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 23:44:21.641871 1751274 system_pods.go:61] "nvidia-device-plugin-daemonset-r5vkz" [7e24d387-21f7-488f-b9f1-64ae0a88f60c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 23:44:21.641909 1751274 system_pods.go:61] "registry-66c9cd494c-r4nd7" [af28ec25-e4aa-4c5c-a962-bdbc9c202d33] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 23:44:21.641932 1751274 system_pods.go:61] "registry-proxy-ghb94" [c982414a-f5f3-4738-a511-fe432f305818] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 23:44:21.641956 1751274 system_pods.go:61] "snapshot-controller-56fcc65765-2zjqz" [975e5350-6459-41db-a2dd-a2943ae7b3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 23:44:21.641988 1751274 system_pods.go:61] "snapshot-controller-56fcc65765-zccbp" [6bc7ef00-8336-4f44-9a42-6596fa3ea8de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 23:44:21.642015 1751274 system_pods.go:61] "storage-provisioner" [9d1eca7a-be9c-45ab-9529-13a60e6a98f9] Running
	I1001 23:44:21.642038 1751274 system_pods.go:74] duration metric: took 183.716569ms to wait for pod list to return data ...
	I1001 23:44:21.642059 1751274 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:44:21.775165 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:21.818461 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:21.834692 1751274 default_sa.go:45] found service account: "default"
	I1001 23:44:21.834769 1751274 default_sa.go:55] duration metric: took 192.687175ms for default service account to be created ...
	I1001 23:44:21.834793 1751274 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:44:21.986910 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:22.092115 1751274 system_pods.go:86] 18 kube-system pods found
	I1001 23:44:22.092196 1751274 system_pods.go:89] "coredns-7c65d6cfc9-sf59g" [f14b09d6-d9ad-48b1-bf24-1e186c3b9a98] Running
	I1001 23:44:22.092221 1751274 system_pods.go:89] "csi-hostpath-attacher-0" [5a38357c-3fe5-4c64-b968-2c1e54ad76a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 23:44:22.092266 1751274 system_pods.go:89] "csi-hostpath-resizer-0" [6f4c65aa-b954-4064-adb0-ecf0ef3515a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 23:44:22.092296 1751274 system_pods.go:89] "csi-hostpathplugin-8h4np" [c1c436b7-a05b-4b37-9cd3-f9ae02e47026] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 23:44:22.092315 1751274 system_pods.go:89] "etcd-addons-515343" [f3a6cd31-66cd-486c-8459-2568036b3e02] Running
	I1001 23:44:22.092340 1751274 system_pods.go:89] "kindnet-hn8k9" [5e27e0d1-bd7f-453c-8b2b-c33941afe721] Running
	I1001 23:44:22.092373 1751274 system_pods.go:89] "kube-apiserver-addons-515343" [98991814-f19e-4fd6-9a1f-be5ca20388f2] Running
	I1001 23:44:22.092396 1751274 system_pods.go:89] "kube-controller-manager-addons-515343" [100ced96-91f9-4a61-9be2-c0d2c0604d87] Running
	I1001 23:44:22.092416 1751274 system_pods.go:89] "kube-ingress-dns-minikube" [46a1893f-9987-445c-997e-fd79e0b5e8cc] Running
	I1001 23:44:22.092434 1751274 system_pods.go:89] "kube-proxy-sfvfg" [817b7c1a-3ef7-47d3-9152-ebd3626b111a] Running
	I1001 23:44:22.092586 1751274 system_pods.go:89] "kube-scheduler-addons-515343" [65e15afa-5028-4fe4-8edd-82002393546a] Running
	I1001 23:44:22.092615 1751274 system_pods.go:89] "metrics-server-84c5f94fbc-42hls" [c4a49c1b-2064-4012-8dbe-0c11d66e402d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 23:44:22.092636 1751274 system_pods.go:89] "nvidia-device-plugin-daemonset-r5vkz" [7e24d387-21f7-488f-b9f1-64ae0a88f60c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 23:44:22.092657 1751274 system_pods.go:89] "registry-66c9cd494c-r4nd7" [af28ec25-e4aa-4c5c-a962-bdbc9c202d33] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 23:44:22.092699 1751274 system_pods.go:89] "registry-proxy-ghb94" [c982414a-f5f3-4738-a511-fe432f305818] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 23:44:22.092725 1751274 system_pods.go:89] "snapshot-controller-56fcc65765-2zjqz" [975e5350-6459-41db-a2dd-a2943ae7b3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 23:44:22.092752 1751274 system_pods.go:89] "snapshot-controller-56fcc65765-zccbp" [6bc7ef00-8336-4f44-9a42-6596fa3ea8de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 23:44:22.092776 1751274 system_pods.go:89] "storage-provisioner" [9d1eca7a-be9c-45ab-9529-13a60e6a98f9] Running
	I1001 23:44:22.092809 1751274 system_pods.go:126] duration metric: took 257.997824ms to wait for k8s-apps to be running ...
	I1001 23:44:22.092836 1751274 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:44:22.092911 1751274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:44:22.108122 1751274 system_svc.go:56] duration metric: took 15.277514ms WaitForService to wait for kubelet
	I1001 23:44:22.108148 1751274 kubeadm.go:582] duration metric: took 16.604152951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:44:22.108167 1751274 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:44:22.235665 1751274 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1001 23:44:22.235704 1751274 node_conditions.go:123] node cpu capacity is 2
	I1001 23:44:22.235717 1751274 node_conditions.go:105] duration metric: took 127.54445ms to run NodePressure ...
	I1001 23:44:22.235729 1751274 start.go:241] waiting for startup goroutines ...
	I1001 23:44:22.295165 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:22.318672 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:22.486604 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:22.775162 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:22.822999 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:22.990446 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:23.276739 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:23.378088 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:23.486083 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:23.775713 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:23.818196 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:23.987207 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:24.274640 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:24.318005 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:24.486547 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:24.775659 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:24.818039 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:24.987658 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:25.274594 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:25.318051 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:25.486587 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:25.775504 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:25.817895 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:25.997745 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:26.275979 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:26.318272 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:26.487298 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:26.774451 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:26.817704 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:26.986778 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:27.276396 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:27.318073 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:27.487166 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:27.778738 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:27.818854 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:27.986664 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:28.275497 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:28.318859 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:28.492849 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:28.775588 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:28.818204 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:28.991605 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:29.276040 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:29.318461 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:29.487374 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:29.775905 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:29.819410 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 23:44:29.987224 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:30.274582 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:30.317817 1751274 kapi.go:107] duration metric: took 14.503421791s to wait for kubernetes.io/minikube-addons=registry ...
	I1001 23:44:30.486556 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:30.774683 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:30.985606 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:31.278315 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:31.487309 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:31.779402 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:31.986807 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:32.275263 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:32.489770 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:32.776171 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:32.986511 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:33.274355 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:33.486705 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:33.775319 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:33.986441 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:34.275778 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:34.506433 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:34.774994 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:34.986030 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:35.286149 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:35.486704 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:35.775389 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:35.986311 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:36.275798 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:36.486488 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:36.774453 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:36.986844 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:37.275265 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:37.486382 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:37.775386 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:37.986738 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:38.275453 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:38.487068 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:38.776033 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:38.985812 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:39.275207 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:39.486388 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:39.775827 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:39.985831 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:40.275154 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:40.491627 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:40.775429 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:40.986085 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:41.276144 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:41.486370 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:41.774738 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:41.988887 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:42.277203 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:42.488777 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:42.777062 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:42.987593 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:43.274439 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:43.486015 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:43.774827 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:43.988096 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:44.275979 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:44.488067 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:44.775239 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:44.986203 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:45.276686 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:45.485683 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:45.775087 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:45.986204 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:46.274811 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:46.487197 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:46.775368 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:46.986120 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:47.275207 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:47.486457 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:47.776658 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:47.985731 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:48.275345 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:48.486808 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:48.779549 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:48.987749 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:49.275780 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:49.488296 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:49.777150 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:49.987332 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:50.276431 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:50.487285 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:50.774750 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:50.986579 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:51.274717 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:51.486163 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:51.775181 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:51.986276 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:52.275536 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:52.485859 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:52.775164 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:52.987979 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:53.275089 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:53.486600 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:53.776215 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:53.986116 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:54.275791 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:54.487214 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:54.775012 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:54.988329 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:55.276412 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:55.486645 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:55.776908 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:55.987276 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:56.274992 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:56.487545 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:56.775097 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:56.986292 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:57.275254 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:57.486359 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:57.774288 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:57.986461 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:58.274734 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:58.485905 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:58.774831 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:58.986683 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:59.275113 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:59.486129 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:44:59.774907 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:44:59.987051 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:00.275289 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:45:00.486612 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:00.780257 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:45:00.986458 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:01.275676 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:45:01.486265 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:01.775463 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:45:01.987152 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:02.274700 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:45:02.488887 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:02.775534 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:45:02.987206 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:03.275535 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:45:03.486647 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:03.775224 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:45:03.988711 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:04.275873 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:45:04.486837 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:04.776271 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 23:45:04.986350 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:05.275731 1751274 kapi.go:107] duration metric: took 49.00559571s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 23:45:05.486359 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:05.987184 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:06.485585 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:06.985885 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:07.485977 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:07.986176 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:08.485970 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:08.986114 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:09.486228 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:09.986688 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:10.486186 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:10.986629 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:11.486627 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:11.986509 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:12.486954 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:12.990955 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:13.486360 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:13.986010 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:14.487431 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:14.986643 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:15.486422 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:15.986902 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:16.486605 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:16.986717 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:17.486456 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:17.986669 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:18.492215 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:18.986782 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:19.486123 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:19.987060 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:20.485940 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:20.986416 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:21.487782 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:21.995703 1751274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 23:45:22.493968 1751274 kapi.go:107] duration metric: took 1m9.512108535s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 23:45:39.664883 1751274 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 23:45:39.664911 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:40.165571 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:40.668124 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:41.165583 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:41.668402 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:42.166051 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:42.668860 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:43.165926 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:43.666047 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:44.165895 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:44.666684 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:45.165715 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:45.665694 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:46.166179 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:46.666186 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:47.166001 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:47.665493 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:48.165069 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:48.665755 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:49.165854 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:49.665693 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:50.165893 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:50.665621 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:51.165836 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:51.665834 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:52.164941 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:52.667238 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:53.166134 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:53.665908 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:54.166366 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:54.666281 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:55.165621 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:55.665929 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:56.165479 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:56.665695 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:57.165440 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:57.665350 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:58.164952 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:58.665387 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:59.165351 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:45:59.665535 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:00.166590 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:00.666403 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:01.165636 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:01.666062 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:02.165774 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:02.665419 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:03.166115 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:03.665707 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:04.165472 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:04.665825 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:05.165897 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:05.666410 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:06.165972 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:06.665649 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:07.165600 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:07.665449 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:08.165641 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:08.665822 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:09.165419 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:09.665722 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:10.166162 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:10.665756 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:11.166252 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:11.666188 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:12.165832 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:12.666027 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:13.165967 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:13.665278 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:14.165052 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:14.665831 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:15.165903 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:15.664877 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:16.166336 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:16.666607 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:17.167544 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:17.667233 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:18.168504 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:18.665813 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:19.165450 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:19.665690 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:20.165835 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:20.668541 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:21.165866 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:21.665051 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:22.165093 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:22.667859 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:23.165447 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:23.665522 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:24.165384 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:24.666411 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:25.165559 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:25.664983 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:26.168714 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:26.665488 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:27.165213 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:27.666076 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:28.165184 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:28.666481 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:29.165402 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:29.666627 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:30.166306 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:30.665820 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:31.166233 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:31.666433 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:32.165855 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:32.665323 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:33.165546 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:33.665978 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:34.167458 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:34.665817 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:35.167028 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:35.664996 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:36.166098 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:36.666083 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:37.166003 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:37.665706 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:38.167017 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:38.665854 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:39.165647 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:39.665453 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:40.165726 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:40.671716 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:41.165747 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:41.678103 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:42.165599 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:42.665902 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:43.166075 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:43.665823 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:44.166377 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:44.666308 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:45.166477 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:45.667008 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:46.166506 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:46.666156 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:47.166429 1751274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 23:46:47.674668 1751274 kapi.go:107] duration metric: took 2m30.012733849s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 23:46:47.676900 1751274 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-515343 cluster.
	I1001 23:46:47.679789 1751274 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 23:46:47.682855 1751274 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1001 23:46:47.686020 1751274 out.go:177] * Enabled addons: storage-provisioner, storage-provisioner-rancher, nvidia-device-plugin, cloud-spanner, default-storageclass, volcano, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1001 23:46:47.688819 1751274 addons.go:510] duration metric: took 2m42.184571785s for enable addons: enabled=[storage-provisioner storage-provisioner-rancher nvidia-device-plugin cloud-spanner default-storageclass volcano ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1001 23:46:47.688866 1751274 start.go:246] waiting for cluster config update ...
	I1001 23:46:47.689747 1751274 start.go:255] writing updated cluster config ...
	I1001 23:46:47.690044 1751274 ssh_runner.go:195] Run: rm -f paused
	I1001 23:46:47.997124 1751274 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 23:46:47.999959 1751274 out.go:177] * Done! kubectl is now configured to use "addons-515343" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	9bd8d7e838bb5       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   1e2ac04d93690       gcp-auth-89d5ffd79-hgzrs
	7bb582697ad18       1a9605c872c1d       4 minutes ago       Running             admission                                0                   f509686715d20       volcano-admission-5874dfdd79-6lxlc
	29f1ecef85f80       289a818c8d9c5       4 minutes ago       Running             controller                               0                   d193ee7d071e2       ingress-nginx-controller-bc57996ff-fctg2
	d626b345d9636       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   2c99f0c279616       csi-hostpathplugin-8h4np
	b7383f7b1b01e       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   2c99f0c279616       csi-hostpathplugin-8h4np
	363780c38b3ab       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   2c99f0c279616       csi-hostpathplugin-8h4np
	7788feae78955       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   2c99f0c279616       csi-hostpathplugin-8h4np
	25656e514beb8       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   2c99f0c279616       csi-hostpathplugin-8h4np
	82ab3727ba234       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   2c99f0c279616       csi-hostpathplugin-8h4np
	3bb0eaaa37281       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   6238247caa966       csi-hostpath-attacher-0
	0fcd111b47e96       23cbb28ae641a       5 minutes ago       Running             volcano-controllers                      0                   c32180ba74502       volcano-controllers-789ffc5785-xd4mg
	b0f67a1768b43       6aa88c604f2b4       5 minutes ago       Running             volcano-scheduler                        0                   5c55bbc511234       volcano-scheduler-6c9778cbdf-lxhq9
	27b46a15d56c5       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   8c0ec3ea11ccd       snapshot-controller-56fcc65765-2zjqz
	935dfc8e9f3be       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   946291d0c6559       csi-hostpath-resizer-0
	7cb1493789e6f       420193b27261a       5 minutes ago       Exited              patch                                    0                   c291d4ff57088       ingress-nginx-admission-patch-k8sz2
	67021b4388088       420193b27261a       5 minutes ago       Exited              create                                   0                   1c3b063f95098       ingress-nginx-admission-create-rhv6q
	8b70d37fda0e4       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   1b8e7b6a24937       snapshot-controller-56fcc65765-zccbp
	b7366bf24a150       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   20d7b8d82fd2f       metrics-server-84c5f94fbc-42hls
	53b0f519a8c47       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   c1391f884e778       cloud-spanner-emulator-5b584cc74-wnkhg
	ad771d20bdde0       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   d012aa7e453ed       local-path-provisioner-86d989889c-zhmq8
	09e2c6195f4c3       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   c90c56b99a003       nvidia-device-plugin-daemonset-r5vkz
	d5a32704c3107       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   ee6f141db028b       registry-66c9cd494c-r4nd7
	a8e961860cb16       77bdba588b953       5 minutes ago       Running             yakd                                     0                   c05dbc092507e       yakd-dashboard-67d98fc6b-v9nc2
	7f484b575277c       f7ed138f698f6       5 minutes ago       Running             registry-proxy                           0                   d4842fae6a612       registry-proxy-ghb94
	5934a0ca5e051       4f725bf50aaa5       5 minutes ago       Running             gadget                                   0                   6fbef845d7f2a       gadget-dtsn2
	8019e0aee236e       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   c349ae61a1b6a       coredns-7c65d6cfc9-sf59g
	a1390d29937e7       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   aecd72f91170a       kube-ingress-dns-minikube
	2c42f84c6afa5       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   03129d40957d8       storage-provisioner
	1593e06b5f25c       24a140c548c07       5 minutes ago       Running             kube-proxy                               0                   3569a099be850       kube-proxy-sfvfg
	ab65dd597c2a8       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   769f9fca14dca       kindnet-hn8k9
	d82742fc7ffc3       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   d7200cec64d34       kube-scheduler-addons-515343
	ded0895280a8b       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   fcccc1d3a2050       kube-apiserver-addons-515343
	dced1a9572990       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   ad52f242ad03c       kube-controller-manager-addons-515343
	e2f3c87f56708       27e3830e14027       6 minutes ago       Running             etcd                                     0                   4dda15e17f646       etcd-addons-515343
	
	
	==> containerd <==
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.790644893Z" level=info msg="TearDown network for sandbox \"4659c71dbef74bc1ef6141dce2c3b91dd0e88c903731c7ef7cceafdaf4d19883\" successfully"
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.790684539Z" level=info msg="StopPodSandbox for \"4659c71dbef74bc1ef6141dce2c3b91dd0e88c903731c7ef7cceafdaf4d19883\" returns successfully"
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.791344618Z" level=info msg="RemovePodSandbox for \"4659c71dbef74bc1ef6141dce2c3b91dd0e88c903731c7ef7cceafdaf4d19883\""
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.791468012Z" level=info msg="Forcibly stopping sandbox \"4659c71dbef74bc1ef6141dce2c3b91dd0e88c903731c7ef7cceafdaf4d19883\""
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.799136836Z" level=info msg="TearDown network for sandbox \"4659c71dbef74bc1ef6141dce2c3b91dd0e88c903731c7ef7cceafdaf4d19883\" successfully"
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.806019020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4659c71dbef74bc1ef6141dce2c3b91dd0e88c903731c7ef7cceafdaf4d19883\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.806186860Z" level=info msg="RemovePodSandbox \"4659c71dbef74bc1ef6141dce2c3b91dd0e88c903731c7ef7cceafdaf4d19883\" returns successfully"
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.809006215Z" level=info msg="StopPodSandbox for \"dc74219d330a51ced64ef59f3c5b8b92e8dab7035a46512f9b55aa0267fb710c\""
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.820274749Z" level=info msg="TearDown network for sandbox \"dc74219d330a51ced64ef59f3c5b8b92e8dab7035a46512f9b55aa0267fb710c\" successfully"
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.820441818Z" level=info msg="StopPodSandbox for \"dc74219d330a51ced64ef59f3c5b8b92e8dab7035a46512f9b55aa0267fb710c\" returns successfully"
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.821064384Z" level=info msg="RemovePodSandbox for \"dc74219d330a51ced64ef59f3c5b8b92e8dab7035a46512f9b55aa0267fb710c\""
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.821103751Z" level=info msg="Forcibly stopping sandbox \"dc74219d330a51ced64ef59f3c5b8b92e8dab7035a46512f9b55aa0267fb710c\""
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.829550805Z" level=info msg="TearDown network for sandbox \"dc74219d330a51ced64ef59f3c5b8b92e8dab7035a46512f9b55aa0267fb710c\" successfully"
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.835362483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc74219d330a51ced64ef59f3c5b8b92e8dab7035a46512f9b55aa0267fb710c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 01 23:47:00 addons-515343 containerd[817]: time="2024-10-01T23:47:00.835484179Z" level=info msg="RemovePodSandbox \"dc74219d330a51ced64ef59f3c5b8b92e8dab7035a46512f9b55aa0267fb710c\" returns successfully"
	Oct 01 23:48:00 addons-515343 containerd[817]: time="2024-10-01T23:48:00.839637504Z" level=info msg="RemoveContainer for \"dd87a2a3e022d5e704b1e7bdca2fa408cb7d4d0ab86f6786d3fdb09725e8aef9\""
	Oct 01 23:48:00 addons-515343 containerd[817]: time="2024-10-01T23:48:00.847160954Z" level=info msg="RemoveContainer for \"dd87a2a3e022d5e704b1e7bdca2fa408cb7d4d0ab86f6786d3fdb09725e8aef9\" returns successfully"
	Oct 01 23:48:00 addons-515343 containerd[817]: time="2024-10-01T23:48:00.849202400Z" level=info msg="StopPodSandbox for \"4dcc26ada2a192c3c97daaf0ec4279d1df2f90907a451e7bb61a757794d1e401\""
	Oct 01 23:48:00 addons-515343 containerd[817]: time="2024-10-01T23:48:00.856888694Z" level=info msg="TearDown network for sandbox \"4dcc26ada2a192c3c97daaf0ec4279d1df2f90907a451e7bb61a757794d1e401\" successfully"
	Oct 01 23:48:00 addons-515343 containerd[817]: time="2024-10-01T23:48:00.856930342Z" level=info msg="StopPodSandbox for \"4dcc26ada2a192c3c97daaf0ec4279d1df2f90907a451e7bb61a757794d1e401\" returns successfully"
	Oct 01 23:48:00 addons-515343 containerd[817]: time="2024-10-01T23:48:00.857461751Z" level=info msg="RemovePodSandbox for \"4dcc26ada2a192c3c97daaf0ec4279d1df2f90907a451e7bb61a757794d1e401\""
	Oct 01 23:48:00 addons-515343 containerd[817]: time="2024-10-01T23:48:00.857500199Z" level=info msg="Forcibly stopping sandbox \"4dcc26ada2a192c3c97daaf0ec4279d1df2f90907a451e7bb61a757794d1e401\""
	Oct 01 23:48:00 addons-515343 containerd[817]: time="2024-10-01T23:48:00.867253054Z" level=info msg="TearDown network for sandbox \"4dcc26ada2a192c3c97daaf0ec4279d1df2f90907a451e7bb61a757794d1e401\" successfully"
	Oct 01 23:48:00 addons-515343 containerd[817]: time="2024-10-01T23:48:00.875050057Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4dcc26ada2a192c3c97daaf0ec4279d1df2f90907a451e7bb61a757794d1e401\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 01 23:48:00 addons-515343 containerd[817]: time="2024-10-01T23:48:00.875196171Z" level=info msg="RemovePodSandbox \"4dcc26ada2a192c3c97daaf0ec4279d1df2f90907a451e7bb61a757794d1e401\" returns successfully"
	
	
	==> coredns [8019e0aee236ea5741cf6aab690fdf0f87369249e8d5b3849ac475d350918faf] <==
	[INFO] 10.244.0.3:55275 - 35135 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000222116s
	[INFO] 10.244.0.3:55275 - 13355 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001559921s
	[INFO] 10.244.0.3:55275 - 7306 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001688173s
	[INFO] 10.244.0.3:55275 - 12315 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000092847s
	[INFO] 10.244.0.3:55275 - 41444 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00019938s
	[INFO] 10.244.0.3:57565 - 51501 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000120161s
	[INFO] 10.244.0.3:57565 - 51288 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042272s
	[INFO] 10.244.0.3:41044 - 1401 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061857s
	[INFO] 10.244.0.3:41044 - 1179 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066599s
	[INFO] 10.244.0.3:58171 - 57371 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058657s
	[INFO] 10.244.0.3:58171 - 57799 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000162162s
	[INFO] 10.244.0.3:60712 - 26742 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001426509s
	[INFO] 10.244.0.3:60712 - 26939 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001651857s
	[INFO] 10.244.0.3:51970 - 51975 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083345s
	[INFO] 10.244.0.3:51970 - 51761 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137991s
	[INFO] 10.244.0.24:38839 - 57630 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195548s
	[INFO] 10.244.0.24:38505 - 54644 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00026776s
	[INFO] 10.244.0.24:39548 - 50120 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155722s
	[INFO] 10.244.0.24:36760 - 42383 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120555s
	[INFO] 10.244.0.24:54753 - 25855 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109758s
	[INFO] 10.244.0.24:35396 - 50508 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012209s
	[INFO] 10.244.0.24:38041 - 27921 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002330226s
	[INFO] 10.244.0.24:33608 - 46582 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002766909s
	[INFO] 10.244.0.24:47767 - 46459 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002185335s
	[INFO] 10.244.0.24:54228 - 15114 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002222799s
	
	
	==> describe nodes <==
	Name:               addons-515343
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-515343
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=addons-515343
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T23_44_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-515343
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-515343"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:43:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-515343
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:50:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:47:04 +0000   Tue, 01 Oct 2024 23:43:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:47:04 +0000   Tue, 01 Oct 2024 23:43:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:47:04 +0000   Tue, 01 Oct 2024 23:43:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:47:04 +0000   Tue, 01 Oct 2024 23:43:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-515343
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 44686abd29454fb6b59212066f58e9a3
	  System UUID:                e047de11-10f0-459c-840b-7bc25efb7088
	  Boot ID:                    3aa8f718-8507-41e8-80ca-0eb33f6ce70e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-wnkhg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gadget                      gadget-dtsn2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  gcp-auth                    gcp-auth-89d5ffd79-hgzrs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-fctg2    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m54s
	  kube-system                 coredns-7c65d6cfc9-sf59g                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpathplugin-8h4np                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 etcd-addons-515343                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m5s
	  kube-system                 kindnet-hn8k9                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m1s
	  kube-system                 kube-apiserver-addons-515343                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-controller-manager-addons-515343       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-sfvfg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-addons-515343                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 metrics-server-84c5f94fbc-42hls             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m56s
	  kube-system                 nvidia-device-plugin-daemonset-r5vkz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-66c9cd494c-r4nd7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 registry-proxy-ghb94                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-2zjqz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 snapshot-controller-56fcc65765-zccbp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-zhmq8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  volcano-system              volcano-admission-5874dfdd79-6lxlc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-controllers-789ffc5785-xd4mg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-scheduler-6c9778cbdf-lxhq9          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-v9nc2              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m59s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  6m13s (x8 over 6m13s)  kubelet          Node addons-515343 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m13s (x7 over 6m13s)  kubelet          Node addons-515343 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m13s (x7 over 6m13s)  kubelet          Node addons-515343 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m6s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m6s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m6s                   kubelet          Node addons-515343 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m6s                   kubelet          Node addons-515343 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m6s                   kubelet          Node addons-515343 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m2s                   node-controller  Node addons-515343 event: Registered Node addons-515343 in Controller
	
	
	==> dmesg <==
	[Oct 1 23:13] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.010864] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.140336] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [e2f3c87f567082e4351ac910587bdd95e4f36d2fe39f299e52c5c4bf932977f8] <==
	{"level":"info","ts":"2024-10-01T23:43:53.726062Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-01T23:43:53.726076Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-01T23:43:53.727043Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-10-01T23:43:53.727277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-10-01T23:43:53.727356Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-10-01T23:43:53.817513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-01T23:43:53.817734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-01T23:43:53.817827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-10-01T23:43:53.817922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-10-01T23:43:53.818005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-01T23:43:53.818090Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-10-01T23:43:53.818170Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-01T23:43:53.822161Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-515343 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T23:43:53.822287Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T23:43:53.822603Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:43:53.823622Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T23:43:53.825214Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T23:43:53.826282Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-01T23:43:53.826382Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:43:53.826499Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:43:53.826521Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:43:53.833150Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T23:43:53.833239Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T23:43:53.833252Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T23:43:53.836530Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [9bd8d7e838bb5723a43e4bd5d5bd1222b6b177890047e4423fdb5c056b365112] <==
	2024/10/01 23:46:46 GCP Auth Webhook started!
	2024/10/01 23:47:04 Ready to marshal response ...
	2024/10/01 23:47:04 Ready to write response ...
	2024/10/01 23:47:05 Ready to marshal response ...
	2024/10/01 23:47:05 Ready to write response ...
	
	
	==> kernel <==
	 23:50:06 up  7:32,  0 users,  load average: 0.34, 1.19, 2.44
	Linux addons-515343 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ab65dd597c2a876af1a933795481a2849037d8a7e17946adae0e7a53651033d3] <==
	I1001 23:47:57.337681       1 main.go:299] handling current node
	I1001 23:48:07.328645       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:48:07.328696       1 main.go:299] handling current node
	I1001 23:48:17.336044       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:48:17.336204       1 main.go:299] handling current node
	I1001 23:48:27.328556       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:48:27.328602       1 main.go:299] handling current node
	I1001 23:48:37.328349       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:48:37.328580       1 main.go:299] handling current node
	I1001 23:48:47.329912       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:48:47.329957       1 main.go:299] handling current node
	I1001 23:48:57.337323       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:48:57.337356       1 main.go:299] handling current node
	I1001 23:49:07.328665       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:49:07.328706       1 main.go:299] handling current node
	I1001 23:49:17.335113       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:49:17.335146       1 main.go:299] handling current node
	I1001 23:49:27.334437       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:49:27.334471       1 main.go:299] handling current node
	I1001 23:49:37.328701       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:49:37.328751       1 main.go:299] handling current node
	I1001 23:49:47.334390       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:49:47.334426       1 main.go:299] handling current node
	I1001 23:49:57.337075       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 23:49:57.337127       1 main.go:299] handling current node
	
	
	==> kube-apiserver [ded0895280a8b6ddf1b4822312b28776ba5fd75f10ad0f77453ec27460b538c8] <==
	W1001 23:45:18.637150       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:19.672230       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:20.503220       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.1.198:443: connect: connection refused
	E1001 23:45:20.503270       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.1.198:443: connect: connection refused" logger="UnhandledError"
	W1001 23:45:20.505083       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:20.575091       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.1.198:443: connect: connection refused
	E1001 23:45:20.575130       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.1.198:443: connect: connection refused" logger="UnhandledError"
	W1001 23:45:20.576827       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:20.708020       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:21.762421       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:22.854348       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:23.903583       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:24.977251       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:26.037080       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:27.086976       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:28.174683       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:29.218072       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.106.178:443: connect: connection refused
	W1001 23:45:39.474889       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.1.198:443: connect: connection refused
	E1001 23:45:39.474929       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.1.198:443: connect: connection refused" logger="UnhandledError"
	W1001 23:46:20.514233       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.1.198:443: connect: connection refused
	E1001 23:46:20.514270       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.1.198:443: connect: connection refused" logger="UnhandledError"
	W1001 23:46:20.582629       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.1.198:443: connect: connection refused
	E1001 23:46:20.582668       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.1.198:443: connect: connection refused" logger="UnhandledError"
	I1001 23:47:04.530002       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I1001 23:47:04.568147       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [dced1a95729907e16c0dc121c0891ad68938be1daeeb26ebd00b273efcdb6d14] <==
	I1001 23:46:20.535913       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 23:46:20.538527       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 23:46:20.552915       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 23:46:20.590593       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 23:46:20.602590       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 23:46:20.609302       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 23:46:20.620876       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 23:46:21.448818       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 23:46:21.463884       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 23:46:22.588780       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 23:46:22.604290       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 23:46:23.595206       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 23:46:23.616103       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 23:46:23.627299       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 23:46:23.632939       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 23:46:23.641447       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 23:46:23.645130       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 23:46:47.566461       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="14.407372ms"
	I1001 23:46:47.567277       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="44.536µs"
	I1001 23:46:53.020880       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1001 23:46:53.026196       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1001 23:46:53.073411       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1001 23:46:53.080740       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1001 23:47:04.261892       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I1001 23:47:04.593440       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-515343"
	
	
	==> kube-proxy [1593e06b5f25cf24ae98b93fbe39a7010912a340ceba15662c6c9f6d75892acc] <==
	I1001 23:44:06.894087       1 server_linux.go:66] "Using iptables proxy"
	I1001 23:44:06.997099       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1001 23:44:06.997165       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 23:44:07.043063       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1001 23:44:07.043128       1 server_linux.go:169] "Using iptables Proxier"
	I1001 23:44:07.047724       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 23:44:07.049211       1 server.go:483] "Version info" version="v1.31.1"
	I1001 23:44:07.049244       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:44:07.061448       1 config.go:199] "Starting service config controller"
	I1001 23:44:07.061491       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 23:44:07.061516       1 config.go:105] "Starting endpoint slice config controller"
	I1001 23:44:07.061527       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 23:44:07.064206       1 config.go:328] "Starting node config controller"
	I1001 23:44:07.064233       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 23:44:07.162780       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 23:44:07.162852       1 shared_informer.go:320] Caches are synced for service config
	I1001 23:44:07.164387       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d82742fc7ffc34ba5c30020174e01fdacba6a114feeaa4a3eb11441fa930a7e1] <==
	W1001 23:43:58.253310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 23:43:58.253342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 23:43:58.253414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 23:43:58.253441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:43:58.253515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 23:43:58.253533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:43:58.253621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1001 23:43:58.253637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:43:58.253675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 23:43:58.253692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:43:58.253787       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 23:43:58.253807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:43:58.253855       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 23:43:58.253875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 23:43:58.253964       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 23:43:58.253982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:43:58.254057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 23:43:58.254074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:43:59.088838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 23:43:59.088884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:43:59.200151       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 23:43:59.200389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 23:43:59.333107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1001 23:43:59.333363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 23:43:59.841338       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 23:46:20 addons-515343 kubelet[1499]: I1001 23:46:20.748738    1499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2vwc\" (UniqueName: \"kubernetes.io/projected/8f0c5a12-b81e-4c73-9f99-65e16209cb5f-kube-api-access-k2vwc\") pod \"gcp-auth-certs-patch-zvtdf\" (UID: \"8f0c5a12-b81e-4c73-9f99-65e16209cb5f\") " pod="gcp-auth/gcp-auth-certs-patch-zvtdf"
	Oct 01 23:46:22 addons-515343 kubelet[1499]: I1001 23:46:22.663359    1499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qddp\" (UniqueName: \"kubernetes.io/projected/bd3442db-128d-4855-a63d-7921f853d206-kube-api-access-5qddp\") pod \"bd3442db-128d-4855-a63d-7921f853d206\" (UID: \"bd3442db-128d-4855-a63d-7921f853d206\") "
	Oct 01 23:46:22 addons-515343 kubelet[1499]: I1001 23:46:22.663440    1499 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2vwc\" (UniqueName: \"kubernetes.io/projected/8f0c5a12-b81e-4c73-9f99-65e16209cb5f-kube-api-access-k2vwc\") pod \"8f0c5a12-b81e-4c73-9f99-65e16209cb5f\" (UID: \"8f0c5a12-b81e-4c73-9f99-65e16209cb5f\") "
	Oct 01 23:46:22 addons-515343 kubelet[1499]: I1001 23:46:22.666156    1499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f0c5a12-b81e-4c73-9f99-65e16209cb5f-kube-api-access-k2vwc" (OuterVolumeSpecName: "kube-api-access-k2vwc") pod "8f0c5a12-b81e-4c73-9f99-65e16209cb5f" (UID: "8f0c5a12-b81e-4c73-9f99-65e16209cb5f"). InnerVolumeSpecName "kube-api-access-k2vwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 01 23:46:22 addons-515343 kubelet[1499]: I1001 23:46:22.666426    1499 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3442db-128d-4855-a63d-7921f853d206-kube-api-access-5qddp" (OuterVolumeSpecName: "kube-api-access-5qddp") pod "bd3442db-128d-4855-a63d-7921f853d206" (UID: "bd3442db-128d-4855-a63d-7921f853d206"). InnerVolumeSpecName "kube-api-access-5qddp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 01 23:46:22 addons-515343 kubelet[1499]: I1001 23:46:22.764893    1499 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5qddp\" (UniqueName: \"kubernetes.io/projected/bd3442db-128d-4855-a63d-7921f853d206-kube-api-access-5qddp\") on node \"addons-515343\" DevicePath \"\""
	Oct 01 23:46:22 addons-515343 kubelet[1499]: I1001 23:46:22.764924    1499 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k2vwc\" (UniqueName: \"kubernetes.io/projected/8f0c5a12-b81e-4c73-9f99-65e16209cb5f-kube-api-access-k2vwc\") on node \"addons-515343\" DevicePath \"\""
	Oct 01 23:46:23 addons-515343 kubelet[1499]: I1001 23:46:23.449145    1499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc74219d330a51ced64ef59f3c5b8b92e8dab7035a46512f9b55aa0267fb710c"
	Oct 01 23:46:23 addons-515343 kubelet[1499]: I1001 23:46:23.461753    1499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4659c71dbef74bc1ef6141dce2c3b91dd0e88c903731c7ef7cceafdaf4d19883"
	Oct 01 23:46:47 addons-515343 kubelet[1499]: I1001 23:46:47.553239    1499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-89d5ffd79-hgzrs" podStartSLOduration=65.605515258 podStartE2EDuration="1m8.553222442s" podCreationTimestamp="2024-10-01 23:45:39 +0000 UTC" firstStartedPulling="2024-10-01 23:46:43.836097019 +0000 UTC m=+163.249157302" lastFinishedPulling="2024-10-01 23:46:46.783804202 +0000 UTC m=+166.196864486" observedRunningTime="2024-10-01 23:46:47.551024775 +0000 UTC m=+166.964085059" watchObservedRunningTime="2024-10-01 23:46:47.553222442 +0000 UTC m=+166.966282734"
	Oct 01 23:46:54 addons-515343 kubelet[1499]: I1001 23:46:54.720163    1499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f0c5a12-b81e-4c73-9f99-65e16209cb5f" path="/var/lib/kubelet/pods/8f0c5a12-b81e-4c73-9f99-65e16209cb5f/volumes"
	Oct 01 23:46:54 addons-515343 kubelet[1499]: I1001 23:46:54.721093    1499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd3442db-128d-4855-a63d-7921f853d206" path="/var/lib/kubelet/pods/bd3442db-128d-4855-a63d-7921f853d206/volumes"
	Oct 01 23:47:00 addons-515343 kubelet[1499]: I1001 23:47:00.760269    1499 scope.go:117] "RemoveContainer" containerID="b4d141487dd841e7f6b9c106a33231f1d65954336d425975282c629ec54a7591"
	Oct 01 23:47:00 addons-515343 kubelet[1499]: I1001 23:47:00.771398    1499 scope.go:117] "RemoveContainer" containerID="956aac53dba7bb69f04d7018696e20e172fe1ad126fcaeb91f16db9171f0e3f9"
	Oct 01 23:47:01 addons-515343 kubelet[1499]: I1001 23:47:01.717184    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-r4nd7" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 23:47:03 addons-515343 kubelet[1499]: I1001 23:47:03.716672    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-ghb94" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 23:47:04 addons-515343 kubelet[1499]: I1001 23:47:04.720137    1499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f8e6da-48fc-4e0f-8892-ced161cf3c5c" path="/var/lib/kubelet/pods/f4f8e6da-48fc-4e0f-8892-ced161cf3c5c/volumes"
	Oct 01 23:47:14 addons-515343 kubelet[1499]: I1001 23:47:14.716672    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-r5vkz" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 23:48:00 addons-515343 kubelet[1499]: I1001 23:48:00.838147    1499 scope.go:117] "RemoveContainer" containerID="dd87a2a3e022d5e704b1e7bdca2fa408cb7d4d0ab86f6786d3fdb09725e8aef9"
	Oct 01 23:48:13 addons-515343 kubelet[1499]: I1001 23:48:13.716887    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-r4nd7" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 23:48:20 addons-515343 kubelet[1499]: I1001 23:48:20.717712    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-ghb94" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 23:48:41 addons-515343 kubelet[1499]: I1001 23:48:41.717008    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-r5vkz" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 23:49:35 addons-515343 kubelet[1499]: I1001 23:49:35.717451    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-r4nd7" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 23:49:47 addons-515343 kubelet[1499]: I1001 23:49:47.716498    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-r5vkz" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 23:49:48 addons-515343 kubelet[1499]: I1001 23:49:48.716669    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-ghb94" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [2c42f84c6afa5f7c81ef9e9b93dbb1fcf1cd3a5337931f8a1a897ebd6aaff786] <==
	I1001 23:44:09.973586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 23:44:09.988395       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 23:44:09.988447       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 23:44:10.000895       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 23:44:10.001040       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-515343_d3289913-fcd9-41bd-8a72-12d8d03b30d0!
	I1001 23:44:10.001938       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3179b89-6788-451b-9828-f46f86957216", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-515343_d3289913-fcd9-41bd-8a72-12d8d03b30d0 became leader
	I1001 23:44:10.103736       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-515343_d3289913-fcd9-41bd-8a72-12d8d03b30d0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-515343 -n addons-515343
helpers_test.go:261: (dbg) Run:  kubectl --context addons-515343 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-rhv6q ingress-nginx-admission-patch-k8sz2 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-515343 describe pod ingress-nginx-admission-create-rhv6q ingress-nginx-admission-patch-k8sz2 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-515343 describe pod ingress-nginx-admission-create-rhv6q ingress-nginx-admission-patch-k8sz2 test-job-nginx-0: exit status 1 (124.635063ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rhv6q" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-k8sz2" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-515343 describe pod ingress-nginx-admission-create-rhv6q ingress-nginx-admission-patch-k8sz2 test-job-nginx-0: exit status 1
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable volcano --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-515343 addons disable volcano --alsologtostderr -v=1: (11.255902607s)
--- FAIL: TestAddons/serial/Volcano (211.07s)
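Note: the non-running-pod listing above (helpers_test.go:261) queries all namespaces with -A, but the follow-up describe (helpers_test.go:277) passes no -n flag, so kubectl only checks the default namespace; the NotFound errors are therefore expected for namespaced pods, which may also already have been cleaned up by that point. A minimal, hedged sketch for scoping the describe to each pod's own namespace (the custom-columns output flags below are standard kubectl options, not something the test harness itself uses):

	kubectl --context addons-515343 get pods -A --field-selector=status.phase!=Running \
	  --no-headers -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name
	# for each "NS NAME" pair printed, describe the pod in its own namespace:
	kubectl --context addons-515343 describe pod NAME -n NS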

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (372.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-920941 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-920941 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m8.702360762s)

                                                
                                                
-- stdout --
	* [old-k8s-version-920941] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-920941" primary control-plane node in "old-k8s-version-920941" cluster
	* Pulling base image v0.0.45-1727731891-master ...
	* Restarting existing docker container for "old-k8s-version-920941" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-920941 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:34:29.359696 1957595 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:34:29.359904 1957595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:34:29.359930 1957595 out.go:358] Setting ErrFile to fd 2...
	I1002 00:34:29.359949 1957595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:34:29.360208 1957595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	I1002 00:34:29.360675 1957595 out.go:352] Setting JSON to false
	I1002 00:34:29.361566 1957595 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":29817,"bootTime":1727799453,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 00:34:29.361657 1957595 start.go:139] virtualization:  
	I1002 00:34:29.364807 1957595 out.go:177] * [old-k8s-version-920941] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1002 00:34:29.367964 1957595 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:34:29.368029 1957595 notify.go:220] Checking for updates...
	I1002 00:34:29.376805 1957595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:34:29.380511 1957595 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1002 00:34:29.383742 1957595 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	I1002 00:34:29.387837 1957595 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 00:34:29.391063 1957595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:34:29.394099 1957595 config.go:182] Loaded profile config "old-k8s-version-920941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1002 00:34:29.397619 1957595 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1002 00:34:29.400172 1957595 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:34:29.438867 1957595 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1002 00:34:29.438989 1957595 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:34:29.523070 1957595 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:43 SystemTime:2024-10-02 00:34:29.504929527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:34:29.523179 1957595 docker.go:318] overlay module found
	I1002 00:34:29.527441 1957595 out.go:177] * Using the docker driver based on existing profile
	I1002 00:34:29.530634 1957595 start.go:297] selected driver: docker
	I1002 00:34:29.530655 1957595 start.go:901] validating driver "docker" against &{Name:old-k8s-version-920941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-920941 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:34:29.530776 1957595 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:34:29.531384 1957595 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:34:29.623235 1957595 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:43 SystemTime:2024-10-02 00:34:29.609070889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:34:29.623649 1957595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:34:29.623674 1957595 cni.go:84] Creating CNI manager for ""
	I1002 00:34:29.623720 1957595 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 00:34:29.623758 1957595 start.go:340] cluster config:
	{Name:old-k8s-version-920941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-920941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:34:29.630852 1957595 out.go:177] * Starting "old-k8s-version-920941" primary control-plane node in "old-k8s-version-920941" cluster
	I1002 00:34:29.633079 1957595 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1002 00:34:29.634942 1957595 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1002 00:34:29.637414 1957595 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1002 00:34:29.637475 1957595 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1002 00:34:29.637486 1957595 cache.go:56] Caching tarball of preloaded images
	I1002 00:34:29.637579 1957595 preload.go:172] Found /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 00:34:29.637594 1957595 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1002 00:34:29.637721 1957595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/config.json ...
	I1002 00:34:29.637933 1957595 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1002 00:34:29.660549 1957595 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1002 00:34:29.660582 1957595 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1002 00:34:29.660601 1957595 cache.go:194] Successfully downloaded all kic artifacts
	I1002 00:34:29.660630 1957595 start.go:360] acquireMachinesLock for old-k8s-version-920941: {Name:mkc21fdd2a9657931b17eac94b6039e94c8c11d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:34:29.660698 1957595 start.go:364] duration metric: took 44.889µs to acquireMachinesLock for "old-k8s-version-920941"
	I1002 00:34:29.660721 1957595 start.go:96] Skipping create...Using existing machine configuration
	I1002 00:34:29.660730 1957595 fix.go:54] fixHost starting: 
	I1002 00:34:29.660995 1957595 cli_runner.go:164] Run: docker container inspect old-k8s-version-920941 --format={{.State.Status}}
	I1002 00:34:29.678222 1957595 fix.go:112] recreateIfNeeded on old-k8s-version-920941: state=Stopped err=<nil>
	W1002 00:34:29.678252 1957595 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 00:34:29.682877 1957595 out.go:177] * Restarting existing docker container for "old-k8s-version-920941" ...
	I1002 00:34:29.685662 1957595 cli_runner.go:164] Run: docker start old-k8s-version-920941
	I1002 00:34:30.089642 1957595 cli_runner.go:164] Run: docker container inspect old-k8s-version-920941 --format={{.State.Status}}
	I1002 00:34:30.116480 1957595 kic.go:430] container "old-k8s-version-920941" state is running.
	I1002 00:34:30.116897 1957595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-920941
	I1002 00:34:30.177730 1957595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/config.json ...
	I1002 00:34:30.177963 1957595 machine.go:93] provisionDockerMachine start ...
	I1002 00:34:30.178031 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:30.222854 1957595 main.go:141] libmachine: Using SSH client type: native
	I1002 00:34:30.223421 1957595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34955 <nil> <nil>}
	I1002 00:34:30.223488 1957595 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 00:34:30.224138 1957595 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56044->127.0.0.1:34955: read: connection reset by peer
	I1002 00:34:33.364046 1957595 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-920941
	
	I1002 00:34:33.364075 1957595 ubuntu.go:169] provisioning hostname "old-k8s-version-920941"
	I1002 00:34:33.364145 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:33.385514 1957595 main.go:141] libmachine: Using SSH client type: native
	I1002 00:34:33.385769 1957595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34955 <nil> <nil>}
	I1002 00:34:33.385781 1957595 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-920941 && echo "old-k8s-version-920941" | sudo tee /etc/hostname
	I1002 00:34:33.540971 1957595 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-920941
	
	I1002 00:34:33.541066 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:33.566600 1957595 main.go:141] libmachine: Using SSH client type: native
	I1002 00:34:33.566897 1957595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34955 <nil> <nil>}
	I1002 00:34:33.566924 1957595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-920941' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-920941/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-920941' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:34:33.712533 1957595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 00:34:33.712567 1957595 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19740-1745120/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-1745120/.minikube}
	I1002 00:34:33.712609 1957595 ubuntu.go:177] setting up certificates
	I1002 00:34:33.712619 1957595 provision.go:84] configureAuth start
	I1002 00:34:33.712690 1957595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-920941
	I1002 00:34:33.738146 1957595 provision.go:143] copyHostCerts
	I1002 00:34:33.738217 1957595 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.pem, removing ...
	I1002 00:34:33.738240 1957595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.pem
	I1002 00:34:33.738299 1957595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.pem (1082 bytes)
	I1002 00:34:33.738408 1957595 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-1745120/.minikube/cert.pem, removing ...
	I1002 00:34:33.738419 1957595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-1745120/.minikube/cert.pem
	I1002 00:34:33.738442 1957595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-1745120/.minikube/cert.pem (1123 bytes)
	I1002 00:34:33.738511 1957595 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-1745120/.minikube/key.pem, removing ...
	I1002 00:34:33.738521 1957595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-1745120/.minikube/key.pem
	I1002 00:34:33.738543 1957595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-1745120/.minikube/key.pem (1675 bytes)
	I1002 00:34:33.738639 1957595 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-920941 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-920941]
	I1002 00:34:34.039519 1957595 provision.go:177] copyRemoteCerts
	I1002 00:34:34.039613 1957595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:34:34.039674 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:34.062126 1957595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34955 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/old-k8s-version-920941/id_rsa Username:docker}
	I1002 00:34:34.166187 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 00:34:34.195602 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 00:34:34.224423 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 00:34:34.263381 1957595 provision.go:87] duration metric: took 550.740825ms to configureAuth
	I1002 00:34:34.263405 1957595 ubuntu.go:193] setting minikube options for container-runtime
	I1002 00:34:34.263599 1957595 config.go:182] Loaded profile config "old-k8s-version-920941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1002 00:34:34.263607 1957595 machine.go:96] duration metric: took 4.085637035s to provisionDockerMachine
	I1002 00:34:34.263614 1957595 start.go:293] postStartSetup for "old-k8s-version-920941" (driver="docker")
	I1002 00:34:34.263625 1957595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 00:34:34.263680 1957595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 00:34:34.263724 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:34.282285 1957595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34955 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/old-k8s-version-920941/id_rsa Username:docker}
	I1002 00:34:34.379269 1957595 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 00:34:34.382715 1957595 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 00:34:34.382751 1957595 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 00:34:34.382815 1957595 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 00:34:34.382832 1957595 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1002 00:34:34.382843 1957595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-1745120/.minikube/addons for local assets ...
	I1002 00:34:34.382902 1957595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-1745120/.minikube/files for local assets ...
	I1002 00:34:34.383016 1957595 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-1745120/.minikube/files/etc/ssl/certs/17505052.pem -> 17505052.pem in /etc/ssl/certs
	I1002 00:34:34.383184 1957595 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 00:34:34.402249 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/files/etc/ssl/certs/17505052.pem --> /etc/ssl/certs/17505052.pem (1708 bytes)
	I1002 00:34:34.437010 1957595 start.go:296] duration metric: took 173.380925ms for postStartSetup
	I1002 00:34:34.437154 1957595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:34:34.437221 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:34.459043 1957595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34955 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/old-k8s-version-920941/id_rsa Username:docker}
	I1002 00:34:34.553851 1957595 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 00:34:34.558437 1957595 fix.go:56] duration metric: took 4.897700305s for fixHost
	I1002 00:34:34.558502 1957595 start.go:83] releasing machines lock for "old-k8s-version-920941", held for 4.89779161s
	I1002 00:34:34.558584 1957595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-920941
	I1002 00:34:34.575631 1957595 ssh_runner.go:195] Run: cat /version.json
	I1002 00:34:34.575687 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:34.575973 1957595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 00:34:34.576044 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:34.601034 1957595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34955 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/old-k8s-version-920941/id_rsa Username:docker}
	I1002 00:34:34.610006 1957595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34955 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/old-k8s-version-920941/id_rsa Username:docker}
	I1002 00:34:34.840401 1957595 ssh_runner.go:195] Run: systemctl --version
	I1002 00:34:34.846127 1957595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 00:34:34.851726 1957595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1002 00:34:34.887514 1957595 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1002 00:34:34.887620 1957595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 00:34:34.897640 1957595 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 00:34:34.897669 1957595 start.go:495] detecting cgroup driver to use...
	I1002 00:34:34.897727 1957595 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 00:34:34.897809 1957595 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 00:34:34.913896 1957595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 00:34:34.933071 1957595 docker.go:217] disabling cri-docker service (if available) ...
	I1002 00:34:34.933167 1957595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 00:34:34.951017 1957595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 00:34:34.965010 1957595 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 00:34:35.066549 1957595 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 00:34:35.176072 1957595 docker.go:233] disabling docker service ...
	I1002 00:34:35.176161 1957595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 00:34:35.190492 1957595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 00:34:35.203417 1957595 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 00:34:35.315610 1957595 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 00:34:35.416316 1957595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 00:34:35.430982 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 00:34:35.450942 1957595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1002 00:34:35.462343 1957595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 00:34:35.473511 1957595 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 00:34:35.473629 1957595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 00:34:35.485404 1957595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 00:34:35.495288 1957595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 00:34:35.514007 1957595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 00:34:35.530074 1957595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 00:34:35.540436 1957595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 00:34:35.550822 1957595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 00:34:35.560282 1957595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 00:34:35.569782 1957595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:34:35.692954 1957595 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 00:34:35.910725 1957595 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 00:34:35.910796 1957595 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 00:34:35.915487 1957595 start.go:563] Will wait 60s for crictl version
	I1002 00:34:35.915542 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:34:35.921190 1957595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 00:34:35.960952 1957595 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1002 00:34:35.961023 1957595 ssh_runner.go:195] Run: containerd --version
	I1002 00:34:35.987667 1957595 ssh_runner.go:195] Run: containerd --version
	I1002 00:34:36.014002 1957595 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1002 00:34:36.015998 1957595 cli_runner.go:164] Run: docker network inspect old-k8s-version-920941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 00:34:36.030564 1957595 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 00:34:36.034288 1957595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:34:36.045315 1957595 kubeadm.go:883] updating cluster {Name:old-k8s-version-920941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-920941 Namespace:default APIServerHAVIP: APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:
/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 00:34:36.045435 1957595 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1002 00:34:36.045512 1957595 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:34:36.094420 1957595 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 00:34:36.094445 1957595 containerd.go:534] Images already preloaded, skipping extraction
	I1002 00:34:36.094506 1957595 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:34:36.132023 1957595 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 00:34:36.132048 1957595 cache_images.go:84] Images are preloaded, skipping loading
	I1002 00:34:36.132056 1957595 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I1002 00:34:36.132178 1957595 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-920941 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-920941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 00:34:36.132252 1957595 ssh_runner.go:195] Run: sudo crictl info
	I1002 00:34:36.171005 1957595 cni.go:84] Creating CNI manager for ""
	I1002 00:34:36.171028 1957595 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 00:34:36.171039 1957595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 00:34:36.171059 1957595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-920941 NodeName:old-k8s-version-920941 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 00:34:36.171185 1957595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-920941"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 00:34:36.171258 1957595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1002 00:34:36.180769 1957595 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 00:34:36.180844 1957595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 00:34:36.189910 1957595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1002 00:34:36.208616 1957595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 00:34:36.227094 1957595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1002 00:34:36.245376 1957595 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 00:34:36.248927 1957595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:34:36.259473 1957595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:34:36.352186 1957595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:34:36.370496 1957595 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941 for IP: 192.168.76.2
	I1002 00:34:36.370556 1957595 certs.go:194] generating shared ca certs ...
	I1002 00:34:36.370586 1957595 certs.go:226] acquiring lock for ca certs: {Name:mkeb93c689dc39169cb991acba6d63d702f9e0e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:34:36.370746 1957595 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.key
	I1002 00:34:36.370816 1957595 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.key
	I1002 00:34:36.370840 1957595 certs.go:256] generating profile certs ...
	I1002 00:34:36.370958 1957595 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.key
	I1002 00:34:36.371056 1957595 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/apiserver.key.f2b253d6
	I1002 00:34:36.371135 1957595 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/proxy-client.key
	I1002 00:34:36.371275 1957595 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/1750505.pem (1338 bytes)
	W1002 00:34:36.371342 1957595 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/1750505_empty.pem, impossibly tiny 0 bytes
	I1002 00:34:36.371371 1957595 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 00:34:36.371419 1957595 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem (1082 bytes)
	I1002 00:34:36.371471 1957595 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/cert.pem (1123 bytes)
	I1002 00:34:36.371529 1957595 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/key.pem (1675 bytes)
	I1002 00:34:36.371600 1957595 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/files/etc/ssl/certs/17505052.pem (1708 bytes)
	I1002 00:34:36.372235 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 00:34:36.436170 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 00:34:36.513182 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 00:34:36.597626 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 00:34:36.641610 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 00:34:36.676897 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 00:34:36.705026 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 00:34:36.741404 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 00:34:36.780857 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 00:34:36.815083 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/1750505.pem --> /usr/share/ca-certificates/1750505.pem (1338 bytes)
	I1002 00:34:36.851373 1957595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/files/etc/ssl/certs/17505052.pem --> /usr/share/ca-certificates/17505052.pem (1708 bytes)
	I1002 00:34:36.893545 1957595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 00:34:36.918779 1957595 ssh_runner.go:195] Run: openssl version
	I1002 00:34:36.926565 1957595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 00:34:36.938372 1957595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:34:36.942921 1957595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 23:43 /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:34:36.943044 1957595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:34:36.953974 1957595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 00:34:36.963312 1957595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1750505.pem && ln -fs /usr/share/ca-certificates/1750505.pem /etc/ssl/certs/1750505.pem"
	I1002 00:34:36.979932 1957595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1750505.pem
	I1002 00:34:36.984317 1957595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:54 /usr/share/ca-certificates/1750505.pem
	I1002 00:34:36.984418 1957595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1750505.pem
	I1002 00:34:36.998156 1957595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1750505.pem /etc/ssl/certs/51391683.0"
	I1002 00:34:37.011536 1957595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17505052.pem && ln -fs /usr/share/ca-certificates/17505052.pem /etc/ssl/certs/17505052.pem"
	I1002 00:34:37.026415 1957595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17505052.pem
	I1002 00:34:37.036159 1957595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:54 /usr/share/ca-certificates/17505052.pem
	I1002 00:34:37.036279 1957595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17505052.pem
	I1002 00:34:37.046397 1957595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17505052.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 00:34:37.058313 1957595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 00:34:37.064493 1957595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 00:34:37.071965 1957595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 00:34:37.081187 1957595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 00:34:37.092821 1957595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 00:34:37.101307 1957595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 00:34:37.109500 1957595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
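
Each "openssl x509 -checkend 86400" run above asks whether the named control-plane certificate remains valid for at least another 24 hours before the restart reuses it. A rough Go equivalent of that check (illustrative only, using the standard crypto/x509 package; the file name in main is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path stays valid for at
// least d, which is what "-checkend 86400" verifies with d = 24h.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("valid for at least 24h:", ok)
}
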
	I1002 00:34:37.117225 1957595 kubeadm.go:392] StartCluster: {Name:old-k8s-version-920941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-920941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:34:37.117372 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 00:34:37.117459 1957595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:34:37.170174 1957595 cri.go:89] found id: "8518536c30aef09017c911aee769571a3ca250e88cc9853905c824c874626cac"
	I1002 00:34:37.170249 1957595 cri.go:89] found id: "1051ac83c993493af1544df235a7ec482b2588f13293c80a3af3112eff956a41"
	I1002 00:34:37.170268 1957595 cri.go:89] found id: "5702ec783ecf1a85e9910ecad76d933e604429ad4ebb11df0f6537272ea91557"
	I1002 00:34:37.170288 1957595 cri.go:89] found id: "7d0046a7ae8533f8fe5ca98c95ba71a7292736d0effab8db2b682672e751ab5f"
	I1002 00:34:37.170313 1957595 cri.go:89] found id: "9882de920e5952a14a2e090a9a310cf476deae60c93e9ecd7f4384e8d6012f04"
	I1002 00:34:37.170331 1957595 cri.go:89] found id: "19bd9ef04d15a6dccf5970eea7159536699fa21c02ae6b578cb94fb5a6c7af6a"
	I1002 00:34:37.170349 1957595 cri.go:89] found id: "11ec001392bd7c1b6e1804233a847099114357802451cb3dfc4f819481c9eac5"
	I1002 00:34:37.170367 1957595 cri.go:89] found id: "8c7992aa581c091fd0ff51c777b72d5212ca433204a6a429359b1bf731b055f4"
	I1002 00:34:37.170385 1957595 cri.go:89] found id: ""
	I1002 00:34:37.170458 1957595 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1002 00:34:37.186709 1957595 cri.go:116] JSON = null
	W1002 00:34:37.186798 1957595 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
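
The crictl/runc pair above is how the restart path enumerates kube-system containers and then cross-checks runc's view of paused containers; here runc returned no JSON while crictl saw 8 containers, hence the unpause warning. A small Go sketch that shells out to the same crictl invocation (the crictl command line is taken verbatim from the log; the wrapper itself is illustrative and runs crictl directly rather than via "sudo -s eval"):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs runs the crictl command from the log above and
// returns the container IDs it prints, one per line.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
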
	I1002 00:34:37.186882 1957595 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 00:34:37.197993 1957595 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1002 00:34:37.198064 1957595 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1002 00:34:37.198133 1957595 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 00:34:37.208581 1957595 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 00:34:37.209060 1957595 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-920941" does not appear in /home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1002 00:34:37.209257 1957595 kubeconfig.go:62] /home/jenkins/minikube-integration/19740-1745120/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-920941" cluster setting kubeconfig missing "old-k8s-version-920941" context setting]
	I1002 00:34:37.209564 1957595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/kubeconfig: {Name:mk014bd742e0b0f4a72d987c0fd643ed22274647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:34:37.210906 1957595 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 00:34:37.228808 1957595 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 00:34:37.228881 1957595 kubeadm.go:597] duration metric: took 30.795345ms to restartPrimaryControlPlane
	I1002 00:34:37.228906 1957595 kubeadm.go:394] duration metric: took 111.68984ms to StartCluster
	I1002 00:34:37.228941 1957595 settings.go:142] acquiring lock: {Name:mk200f8894606b147c1230e7434ca41f474a2cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:34:37.229015 1957595 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1002 00:34:37.229662 1957595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/kubeconfig: {Name:mk014bd742e0b0f4a72d987c0fd643ed22274647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:34:37.229927 1957595 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 00:34:37.230277 1957595 config.go:182] Loaded profile config "old-k8s-version-920941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1002 00:34:37.230347 1957595 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:34:37.230434 1957595 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-920941"
	I1002 00:34:37.230448 1957595 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-920941"
	W1002 00:34:37.230455 1957595 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:34:37.230484 1957595 host.go:66] Checking if "old-k8s-version-920941" exists ...
	I1002 00:34:37.230500 1957595 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-920941"
	I1002 00:34:37.230536 1957595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-920941"
	I1002 00:34:37.230829 1957595 cli_runner.go:164] Run: docker container inspect old-k8s-version-920941 --format={{.State.Status}}
	I1002 00:34:37.230980 1957595 cli_runner.go:164] Run: docker container inspect old-k8s-version-920941 --format={{.State.Status}}
	I1002 00:34:37.232702 1957595 addons.go:69] Setting dashboard=true in profile "old-k8s-version-920941"
	I1002 00:34:37.232763 1957595 addons.go:234] Setting addon dashboard=true in "old-k8s-version-920941"
	W1002 00:34:37.232787 1957595 addons.go:243] addon dashboard should already be in state true
	I1002 00:34:37.232830 1957595 host.go:66] Checking if "old-k8s-version-920941" exists ...
	I1002 00:34:37.233316 1957595 cli_runner.go:164] Run: docker container inspect old-k8s-version-920941 --format={{.State.Status}}
	I1002 00:34:37.233714 1957595 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-920941"
	I1002 00:34:37.233918 1957595 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-920941"
	W1002 00:34:37.233929 1957595 addons.go:243] addon metrics-server should already be in state true
	I1002 00:34:37.233959 1957595 host.go:66] Checking if "old-k8s-version-920941" exists ...
	I1002 00:34:37.243059 1957595 out.go:177] * Verifying Kubernetes components...
	I1002 00:34:37.243363 1957595 cli_runner.go:164] Run: docker container inspect old-k8s-version-920941 --format={{.State.Status}}
	I1002 00:34:37.245225 1957595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:34:37.295053 1957595 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-920941"
	W1002 00:34:37.295076 1957595 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:34:37.295102 1957595 host.go:66] Checking if "old-k8s-version-920941" exists ...
	I1002 00:34:37.295518 1957595 cli_runner.go:164] Run: docker container inspect old-k8s-version-920941 --format={{.State.Status}}
	I1002 00:34:37.331635 1957595 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:34:37.331636 1957595 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:34:37.334208 1957595 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1002 00:34:37.334277 1957595 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:34:37.334288 1957595 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:34:37.334301 1957595 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:34:37.334315 1957595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:34:37.334348 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:37.334366 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:37.341423 1957595 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 00:34:37.343355 1957595 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 00:34:37.343381 1957595 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 00:34:37.343468 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:37.377186 1957595 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:34:37.377207 1957595 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:34:37.377270 1957595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-920941
	I1002 00:34:37.397875 1957595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34955 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/old-k8s-version-920941/id_rsa Username:docker}
	I1002 00:34:37.415341 1957595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34955 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/old-k8s-version-920941/id_rsa Username:docker}
	I1002 00:34:37.432126 1957595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34955 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/old-k8s-version-920941/id_rsa Username:docker}
	I1002 00:34:37.452656 1957595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34955 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/old-k8s-version-920941/id_rsa Username:docker}
	I1002 00:34:37.481426 1957595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:34:37.530608 1957595 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-920941" to be "Ready" ...
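
node_ready.go now polls the node object until its Ready condition turns true or the 6m0s budget runs out; the "connection refused" lines further down are individual polls failing while the apiserver restarts. A hedged client-go sketch of such a readiness check (assumed kubeconfig path and poll interval; not minikube's own implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node currently has Ready=True.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connection refused" while the apiserver restarts
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(context.Background(), cs, "old-k8s-version-920941"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to be Ready")
}
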
	I1002 00:34:37.621757 1957595 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:34:37.621785 1957595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:34:37.689304 1957595 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:34:37.689371 1957595 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:34:37.695112 1957595 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 00:34:37.695174 1957595 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 00:34:37.721281 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:34:37.758582 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:34:37.774155 1957595 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:34:37.774291 1957595 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:34:37.787071 1957595 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 00:34:37.787147 1957595 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 00:34:37.886456 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:34:37.896487 1957595 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 00:34:37.896564 1957595 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 00:34:38.001379 1957595 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 00:34:38.001460 1957595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 00:34:38.148899 1957595 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 00:34:38.148973 1957595 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1002 00:34:38.157923 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:38.158001 1957595 retry.go:31] will retry after 311.67596ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1002 00:34:38.158074 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:38.158101 1957595 retry.go:31] will retry after 352.540311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
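
Each "apply failed, will retry" / "will retry after Nms" pair above comes from a retry helper that re-runs the kubectl apply after a short, growing delay while the apiserver on localhost:8443 is still coming back up. A simplified sketch of that pattern (the general shape only, not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a little longer before each retry, similar to the
// "will retry after ..." lines in this log.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection to the server localhost:8443 was refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}
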
	I1002 00:34:38.198989 1957595 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 00:34:38.199065 1957595 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W1002 00:34:38.264384 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:38.264418 1957595 retry.go:31] will retry after 261.658089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:38.291511 1957595 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 00:34:38.291536 1957595 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 00:34:38.324376 1957595 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 00:34:38.324396 1957595 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 00:34:38.358075 1957595 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:34:38.358096 1957595 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 00:34:38.403909 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:34:38.470952 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:34:38.511619 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:34:38.527147 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1002 00:34:38.872968 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:38.873010 1957595 retry.go:31] will retry after 233.133749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1002 00:34:38.924183 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:38.924212 1957595 retry.go:31] will retry after 420.041322ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1002 00:34:38.924257 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:38.924264 1957595 retry.go:31] will retry after 538.48756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1002 00:34:38.924300 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:38.924306 1957595 retry.go:31] will retry after 311.145767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.107162 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1002 00:34:39.224188 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.224217 1957595 retry.go:31] will retry after 446.602758ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.235586 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:34:39.345117 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 00:34:39.388135 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.388164 1957595 retry.go:31] will retry after 570.521176ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.463442 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 00:34:39.472707 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.472738 1957595 retry.go:31] will retry after 297.084492ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.531222 1957595 node_ready.go:53] error getting node "old-k8s-version-920941": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-920941": dial tcp 192.168.76.2:8443: connect: connection refused
	W1002 00:34:39.562743 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.562787 1957595 retry.go:31] will retry after 551.113231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.671191 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1002 00:34:39.753161 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.753193 1957595 retry.go:31] will retry after 780.514599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.770325 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 00:34:39.848416 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.848444 1957595 retry.go:31] will retry after 549.573525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:39.959851 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1002 00:34:40.068765 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:40.068798 1957595 retry.go:31] will retry after 1.208291191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:40.115062 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 00:34:40.212072 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:40.212119 1957595 retry.go:31] will retry after 486.853729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:40.398533 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 00:34:40.479185 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:40.479217 1957595 retry.go:31] will retry after 1.685783426s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:40.533932 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1002 00:34:40.617648 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:40.617730 1957595 retry.go:31] will retry after 1.218678114s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:40.699931 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 00:34:40.794537 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:40.794564 1957595 retry.go:31] will retry after 1.448244116s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:41.277663 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1002 00:34:41.374150 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:41.374183 1957595 retry.go:31] will retry after 1.019261294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:41.531819 1957595 node_ready.go:53] error getting node "old-k8s-version-920941": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-920941": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 00:34:41.837063 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1002 00:34:41.930907 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:41.930943 1957595 retry.go:31] will retry after 1.43325621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:42.165780 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:34:42.243141 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 00:34:42.253567 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:42.253603 1957595 retry.go:31] will retry after 1.352829079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1002 00:34:42.350375 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:42.350410 1957595 retry.go:31] will retry after 1.704102754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:42.393639 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1002 00:34:42.493791 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:42.493825 1957595 retry.go:31] will retry after 1.047605072s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:43.364667 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1002 00:34:43.459844 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:43.459877 1957595 retry.go:31] will retry after 2.107033014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:43.541626 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:34:43.607009 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 00:34:43.644539 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:43.644567 1957595 retry.go:31] will retry after 1.676537137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1002 00:34:43.738297 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:43.738325 1957595 retry.go:31] will retry after 3.790318088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:44.032110 1957595 node_ready.go:53] error getting node "old-k8s-version-920941": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-920941": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 00:34:44.055421 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 00:34:44.143418 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:44.143445 1957595 retry.go:31] will retry after 4.011530551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:45.321693 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1002 00:34:45.410121 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:45.410153 1957595 retry.go:31] will retry after 4.257310115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:45.568054 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1002 00:34:45.805251 1957595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 00:34:45.805280 1957595 retry.go:31] will retry after 2.684361463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
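
The repeated "connection refused" failures above occur while the restarted kube-apiserver on localhost:8443 is still coming up; each failed addon apply is rescheduled with a randomized delay (the retry.go:31 lines). Below is a minimal Go sketch of that retry-with-backoff idea, with illustrative names, attempt counts, and delays rather than minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs a kubectl apply until it succeeds or attempts run out,
    // sleeping a randomized interval between tries, roughly like the retry.go lines above.
    // The command arguments and attempt count here are illustrative only.
    func applyWithRetry(args []string, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command("kubectl", args...).Run(); err == nil {
    			return nil
    		}
    		wait := time.Duration(1+rand.Intn(4)) * time.Second // randomized backoff
    		fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	// Manifest path copied from the log purely for illustration.
    	err := applyWithRetry([]string{"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml"}, 5)
    	if err != nil {
    		fmt.Println("giving up:", err)
    	}
    }
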
	I1002 00:34:46.532028 1957595 node_ready.go:53] error getting node "old-k8s-version-920941": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-920941": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 00:34:47.528897 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:34:48.155956 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:34:48.490543 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:34:49.668032 1957595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:34:57.032664 1957595 node_ready.go:53] error getting node "old-k8s-version-920941": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-920941": net/http: TLS handshake timeout
	I1002 00:34:58.705029 1957595 node_ready.go:49] node "old-k8s-version-920941" has status "Ready":"True"
	I1002 00:34:58.705052 1957595 node_ready.go:38] duration metric: took 21.174360824s for node "old-k8s-version-920941" to be "Ready" ...
	I1002 00:34:58.705078 1957595 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:34:58.956117 1957595 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-nbdkx" in "kube-system" namespace to be "Ready" ...
	I1002 00:34:59.142654 1957595 pod_ready.go:93] pod "coredns-74ff55c5b-nbdkx" in "kube-system" namespace has status "Ready":"True"
	I1002 00:34:59.142731 1957595 pod_ready.go:82] duration metric: took 186.504716ms for pod "coredns-74ff55c5b-nbdkx" in "kube-system" namespace to be "Ready" ...
	I1002 00:34:59.142767 1957595 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-920941" in "kube-system" namespace to be "Ready" ...
	I1002 00:34:59.202790 1957595 pod_ready.go:93] pod "etcd-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"True"
	I1002 00:34:59.202867 1957595 pod_ready.go:82] duration metric: took 60.062033ms for pod "etcd-old-k8s-version-920941" in "kube-system" namespace to be "Ready" ...
	I1002 00:34:59.202897 1957595 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-920941" in "kube-system" namespace to be "Ready" ...
	I1002 00:35:00.178362 1957595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.649420585s)
	I1002 00:35:00.178428 1957595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (12.022453265s)
	I1002 00:35:01.238815 1957595 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:01.465631 1957595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.797558811s)
	I1002 00:35:01.465709 1957595 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-920941"
	I1002 00:35:01.465792 1957595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.975200431s)
	I1002 00:35:01.467435 1957595 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-920941 addons enable metrics-server
	
	I1002 00:35:01.469163 1957595 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1002 00:35:01.471027 1957595 addons.go:510] duration metric: took 24.2406829s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1002 00:35:03.710240 1957595 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:06.209554 1957595 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:08.212366 1957595 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:10.210749 1957595 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"True"
	I1002 00:35:10.210843 1957595 pod_ready.go:82] duration metric: took 11.007913487s for pod "kube-apiserver-old-k8s-version-920941" in "kube-system" namespace to be "Ready" ...
	I1002 00:35:10.210869 1957595 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace to be "Ready" ...
	I1002 00:35:12.217991 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:14.717670 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:17.216889 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:19.217566 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:21.722504 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:24.218636 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:26.720086 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:29.216755 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:31.217548 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:33.719149 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:36.216336 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:38.216639 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:40.217681 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:42.717978 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:44.718174 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:47.217331 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:49.718351 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:52.217857 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:54.717353 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:57.217769 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:35:59.716913 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:01.717482 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:03.718388 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:06.217010 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:08.775943 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:11.217838 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:13.217946 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:15.219050 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:17.223403 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:19.717704 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:21.216971 1957595 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:21.216996 1957595 pod_ready.go:82] duration metric: took 1m11.006105841s for pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:21.217007 1957595 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-42b7q" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:21.221776 1957595 pod_ready.go:93] pod "kube-proxy-42b7q" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:21.221801 1957595 pod_ready.go:82] duration metric: took 4.785683ms for pod "kube-proxy-42b7q" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:21.221814 1957595 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-920941" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:23.227502 1957595 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:25.228486 1957595 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:26.228584 1957595 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:26.228662 1957595 pod_ready.go:82] duration metric: took 5.006839188s for pod "kube-scheduler-old-k8s-version-920941" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:26.228688 1957595 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:28.234816 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:30.236092 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:32.736345 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:35.234991 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:37.235434 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:39.236103 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:41.236546 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:43.735666 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:46.235955 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:48.734957 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:50.735615 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:52.735648 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:55.236733 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:57.239178 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:59.734959 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:01.735219 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:04.237345 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:06.735087 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:08.735842 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:11.235348 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:13.735763 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:16.234810 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:18.235343 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:20.735050 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:23.235940 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:25.735045 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:28.234568 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:30.235503 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:32.735295 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:35.235317 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:37.734651 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:40.235325 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:42.235666 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:44.735462 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:47.234819 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:49.235228 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:51.734924 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:53.735129 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:56.234708 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:58.735319 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:01.234117 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:03.235524 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:05.735867 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:08.235269 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:10.235566 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:12.735472 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:14.737310 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:17.234735 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:19.735217 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:21.735317 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:24.235157 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:26.734605 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:28.734860 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:30.735008 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:33.234148 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:35.234909 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:37.735168 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:39.735540 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:41.735835 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:44.235027 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:46.734386 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:48.735126 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:50.736039 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:53.234472 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:55.234593 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:57.235198 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:59.235364 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:01.235569 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:03.735110 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:06.235324 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:08.734702 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:10.736790 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:13.235130 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:15.235733 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:17.734764 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:19.734977 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:21.735890 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:24.235463 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:26.235565 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:28.735069 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:30.735329 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:32.736129 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:35.235603 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:37.734912 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:39.736246 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:42.235540 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:44.735190 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:47.236303 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:49.736213 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:52.235593 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:54.735856 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:57.235708 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:59.735217 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:02.235341 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:04.235464 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:06.752875 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:09.235497 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:11.739045 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:14.234873 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:16.734971 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:18.735025 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:21.234561 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:23.735307 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:25.735630 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:26.235407 1957595 pod_ready.go:82] duration metric: took 4m0.006692573s for pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace to be "Ready" ...
	E1002 00:40:26.235429 1957595 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:40:26.235438 1957595 pod_ready.go:39] duration metric: took 5m27.530349402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
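
The extra wait above expires because metrics-server-9975d5f86-49vwr never reaches Ready: the kubelet excerpts further down show its image fake.domain/registry.k8s.io/echoserver:1.4 failing to pull (ErrImagePull/ImagePullBackOff), so the pod_ready polling keeps reporting "Ready":"False" until the context deadline is exceeded. A minimal client-go sketch of the same Ready-condition polling follows; it assumes the kubeconfig path shown in the report and uses an illustrative timeout, and is not minikube's pod_ready implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path as used on the node in this report.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // illustrative timeout
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"metrics-server-9975d5f86-49vwr", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // poll interval, illustrative
    	}
    	fmt.Println("timed out waiting for Ready")
    }
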
	I1002 00:40:26.235452 1957595 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:40:26.235482 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:40:26.235571 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:40:26.281727 1957595 cri.go:89] found id: "0a25b599263d779d6664295d455c78ad67aae1ea393696c88eb6887fedc9bf29"
	I1002 00:40:26.281798 1957595 cri.go:89] found id: "11ec001392bd7c1b6e1804233a847099114357802451cb3dfc4f819481c9eac5"
	I1002 00:40:26.281808 1957595 cri.go:89] found id: ""
	I1002 00:40:26.281817 1957595 logs.go:282] 2 containers: [0a25b599263d779d6664295d455c78ad67aae1ea393696c88eb6887fedc9bf29 11ec001392bd7c1b6e1804233a847099114357802451cb3dfc4f819481c9eac5]
	I1002 00:40:26.281883 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.285748 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.289204 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1002 00:40:26.289276 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:40:26.330580 1957595 cri.go:89] found id: "ab6cea850151c0b10687176375d47fb249199d273efd44a9c5d6a1d39e3f623a"
	I1002 00:40:26.330601 1957595 cri.go:89] found id: "8c7992aa581c091fd0ff51c777b72d5212ca433204a6a429359b1bf731b055f4"
	I1002 00:40:26.330606 1957595 cri.go:89] found id: ""
	I1002 00:40:26.330613 1957595 logs.go:282] 2 containers: [ab6cea850151c0b10687176375d47fb249199d273efd44a9c5d6a1d39e3f623a 8c7992aa581c091fd0ff51c777b72d5212ca433204a6a429359b1bf731b055f4]
	I1002 00:40:26.330687 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.334426 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.338199 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1002 00:40:26.338294 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:40:26.383276 1957595 cri.go:89] found id: "9f4ef323c339f7d0dd395ca641e6b83c6ee625242e1e405bcb230f0280f833f9"
	I1002 00:40:26.383301 1957595 cri.go:89] found id: "8518536c30aef09017c911aee769571a3ca250e88cc9853905c824c874626cac"
	I1002 00:40:26.383306 1957595 cri.go:89] found id: ""
	I1002 00:40:26.383323 1957595 logs.go:282] 2 containers: [9f4ef323c339f7d0dd395ca641e6b83c6ee625242e1e405bcb230f0280f833f9 8518536c30aef09017c911aee769571a3ca250e88cc9853905c824c874626cac]
	I1002 00:40:26.383404 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.387099 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.390623 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:40:26.390695 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:40:26.427645 1957595 cri.go:89] found id: "87d3a4aa9e474d9d563b52794771a880f456688ecc28b5c9391254d7e388d601"
	I1002 00:40:26.427672 1957595 cri.go:89] found id: "19bd9ef04d15a6dccf5970eea7159536699fa21c02ae6b578cb94fb5a6c7af6a"
	I1002 00:40:26.427683 1957595 cri.go:89] found id: ""
	I1002 00:40:26.427691 1957595 logs.go:282] 2 containers: [87d3a4aa9e474d9d563b52794771a880f456688ecc28b5c9391254d7e388d601 19bd9ef04d15a6dccf5970eea7159536699fa21c02ae6b578cb94fb5a6c7af6a]
	I1002 00:40:26.427749 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.431220 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.434462 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:40:26.434533 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:40:26.474025 1957595 cri.go:89] found id: "8ff3cc680606db52e7aec936aa5a3da89b1f05ec9c72362a02657eefc93740a6"
	I1002 00:40:26.474046 1957595 cri.go:89] found id: "7d0046a7ae8533f8fe5ca98c95ba71a7292736d0effab8db2b682672e751ab5f"
	I1002 00:40:26.474051 1957595 cri.go:89] found id: ""
	I1002 00:40:26.474058 1957595 logs.go:282] 2 containers: [8ff3cc680606db52e7aec936aa5a3da89b1f05ec9c72362a02657eefc93740a6 7d0046a7ae8533f8fe5ca98c95ba71a7292736d0effab8db2b682672e751ab5f]
	I1002 00:40:26.474118 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.478508 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.482063 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:40:26.482131 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:40:26.529778 1957595 cri.go:89] found id: "ad357c0209d969730c7b744d5c2fbad0a7c610d618ddddb9a2fc771ce023b605"
	I1002 00:40:26.529803 1957595 cri.go:89] found id: "9882de920e5952a14a2e090a9a310cf476deae60c93e9ecd7f4384e8d6012f04"
	I1002 00:40:26.529809 1957595 cri.go:89] found id: ""
	I1002 00:40:26.529817 1957595 logs.go:282] 2 containers: [ad357c0209d969730c7b744d5c2fbad0a7c610d618ddddb9a2fc771ce023b605 9882de920e5952a14a2e090a9a310cf476deae60c93e9ecd7f4384e8d6012f04]
	I1002 00:40:26.529877 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.533446 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.536980 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1002 00:40:26.537109 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:40:26.578079 1957595 cri.go:89] found id: "a767e421df83a00b32383377cc55e4151e7bd6f66d1066e2c265fd0d78172cac"
	I1002 00:40:26.578103 1957595 cri.go:89] found id: "1051ac83c993493af1544df235a7ec482b2588f13293c80a3af3112eff956a41"
	I1002 00:40:26.578108 1957595 cri.go:89] found id: ""
	I1002 00:40:26.578116 1957595 logs.go:282] 2 containers: [a767e421df83a00b32383377cc55e4151e7bd6f66d1066e2c265fd0d78172cac 1051ac83c993493af1544df235a7ec482b2588f13293c80a3af3112eff956a41]
	I1002 00:40:26.578172 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.581929 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.585716 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:40:26.585837 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:40:26.634661 1957595 cri.go:89] found id: "287dac3a47db438435ee5f4915263ed26e3f6eae852bb618365b6490d95b4453"
	I1002 00:40:26.634692 1957595 cri.go:89] found id: "4a7d63ad64058ab8a816f95771113a6646e7f49235241d0b9c99ee631cdf7f38"
	I1002 00:40:26.634697 1957595 cri.go:89] found id: ""
	I1002 00:40:26.634704 1957595 logs.go:282] 2 containers: [287dac3a47db438435ee5f4915263ed26e3f6eae852bb618365b6490d95b4453 4a7d63ad64058ab8a816f95771113a6646e7f49235241d0b9c99ee631cdf7f38]
	I1002 00:40:26.634769 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.638476 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.641979 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1002 00:40:26.642082 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1002 00:40:26.684232 1957595 cri.go:89] found id: "6d607cf4c97f9850c9226c3b461edd448574512ffe663325bf684f333c1bef50"
	I1002 00:40:26.684295 1957595 cri.go:89] found id: ""
	I1002 00:40:26.684318 1957595 logs.go:282] 1 containers: [6d607cf4c97f9850c9226c3b461edd448574512ffe663325bf684f333c1bef50]
	I1002 00:40:26.684407 1957595 ssh_runner.go:195] Run: which crictl
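
The crictl calls above locate container IDs by name (crictl ps -a --quiet --name=...), and the log-gathering steps that follow fetch the last 400 lines of each container (crictl logs --tail 400 <id>). A small Go sketch wrapping those same two commands; the helper name and error handling are illustrative, not minikube's logs.go code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerLogs lists container IDs for a CRI container name and fetches the
    // last `tail` lines of each one's log, mirroring the crictl calls in the report.
    func containerLogs(name string, tail int) (map[string]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	logs := map[string]string{}
    	for _, id := range strings.Fields(string(out)) {
    		raw, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
    		if err != nil {
    			return nil, err
    		}
    		logs[id] = string(raw)
    	}
    	return logs, nil
    }

    func main() {
    	logs, err := containerLogs("kube-apiserver", 400)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	for id, l := range logs {
    		fmt.Printf("== %s ==\n%s\n", id, l)
    	}
    }
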
	I1002 00:40:26.688115 1957595 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:40:26.688142 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:40:26.840427 1957595 logs.go:123] Gathering logs for kube-apiserver [0a25b599263d779d6664295d455c78ad67aae1ea393696c88eb6887fedc9bf29] ...
	I1002 00:40:26.840617 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a25b599263d779d6664295d455c78ad67aae1ea393696c88eb6887fedc9bf29"
	I1002 00:40:26.907015 1957595 logs.go:123] Gathering logs for etcd [ab6cea850151c0b10687176375d47fb249199d273efd44a9c5d6a1d39e3f623a] ...
	I1002 00:40:26.907065 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab6cea850151c0b10687176375d47fb249199d273efd44a9c5d6a1d39e3f623a"
	I1002 00:40:26.952130 1957595 logs.go:123] Gathering logs for coredns [9f4ef323c339f7d0dd395ca641e6b83c6ee625242e1e405bcb230f0280f833f9] ...
	I1002 00:40:26.952161 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f4ef323c339f7d0dd395ca641e6b83c6ee625242e1e405bcb230f0280f833f9"
	I1002 00:40:26.993101 1957595 logs.go:123] Gathering logs for coredns [8518536c30aef09017c911aee769571a3ca250e88cc9853905c824c874626cac] ...
	I1002 00:40:26.993131 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8518536c30aef09017c911aee769571a3ca250e88cc9853905c824c874626cac"
	I1002 00:40:27.051452 1957595 logs.go:123] Gathering logs for kube-proxy [8ff3cc680606db52e7aec936aa5a3da89b1f05ec9c72362a02657eefc93740a6] ...
	I1002 00:40:27.051531 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ff3cc680606db52e7aec936aa5a3da89b1f05ec9c72362a02657eefc93740a6"
	I1002 00:40:27.101863 1957595 logs.go:123] Gathering logs for kubelet ...
	I1002 00:40:27.101891 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1002 00:40:27.165499 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.403474     657 reflector.go:138] object-"kube-system"/"kube-proxy-token-72v75": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-72v75" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.165767 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.403634     657 reflector.go:138] object-"kube-system"/"kindnet-token-drxz7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-drxz7" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.165977 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.403804     657 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.166190 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.403951     657 reflector.go:138] object-"kube-system"/"coredns-token-p2g4h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-p2g4h" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.166401 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.406263     657 reflector.go:138] object-"default"/"default-token-lmpgm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-lmpgm" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.166604 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.406638     657 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.166826 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.406794     657 reflector.go:138] object-"kube-system"/"metrics-server-token-x2fml": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-x2fml" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.167052 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.406955     657 reflector.go:138] object-"kube-system"/"storage-provisioner-token-2v6pp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-2v6pp" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.175955 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:02 old-k8s-version-920941 kubelet[657]: E1002 00:35:02.816096     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1002 00:40:27.176152 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:03 old-k8s-version-920941 kubelet[657]: E1002 00:35:03.404518     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.179250 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:17 old-k8s-version-920941 kubelet[657]: E1002 00:35:17.054839     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1002 00:40:27.180905 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:24 old-k8s-version-920941 kubelet[657]: E1002 00:35:24.463670     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.181701 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:25 old-k8s-version-920941 kubelet[657]: E1002 00:35:25.467856     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.181887 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:28 old-k8s-version-920941 kubelet[657]: E1002 00:35:28.036655     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.182213 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:32 old-k8s-version-920941 kubelet[657]: E1002 00:35:32.484516     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.182651 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:32 old-k8s-version-920941 kubelet[657]: E1002 00:35:32.507888     657 pod_workers.go:191] Error syncing pod 4cf65c92-656f-422a-952b-05891b36cb68 ("storage-provisioner_kube-system(4cf65c92-656f-422a-952b-05891b36cb68)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4cf65c92-656f-422a-952b-05891b36cb68)"
	W1002 00:40:27.185453 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:39 old-k8s-version-920941 kubelet[657]: E1002 00:35:39.045417     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1002 00:40:27.186187 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:46 old-k8s-version-920941 kubelet[657]: E1002 00:35:46.553569     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.186376 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:50 old-k8s-version-920941 kubelet[657]: E1002 00:35:50.037065     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.186705 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:52 old-k8s-version-920941 kubelet[657]: E1002 00:35:52.484920     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.186890 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:01 old-k8s-version-920941 kubelet[657]: E1002 00:36:01.036920     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.187229 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:05 old-k8s-version-920941 kubelet[657]: E1002 00:36:05.038238     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.187413 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:14 old-k8s-version-920941 kubelet[657]: E1002 00:36:14.037429     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.187999 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:17 old-k8s-version-920941 kubelet[657]: E1002 00:36:17.639596     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.188326 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:22 old-k8s-version-920941 kubelet[657]: E1002 00:36:22.485499     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.190840 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:25 old-k8s-version-920941 kubelet[657]: E1002 00:36:25.067982     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1002 00:40:27.191173 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:36 old-k8s-version-920941 kubelet[657]: E1002 00:36:36.036206     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.191359 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:37 old-k8s-version-920941 kubelet[657]: E1002 00:36:37.038810     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.191686 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:49 old-k8s-version-920941 kubelet[657]: E1002 00:36:49.045805     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.191870 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:50 old-k8s-version-920941 kubelet[657]: E1002 00:36:50.037091     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.192053 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:03 old-k8s-version-920941 kubelet[657]: E1002 00:37:03.040054     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.192644 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:04 old-k8s-version-920941 kubelet[657]: E1002 00:37:04.765096     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.192971 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:12 old-k8s-version-920941 kubelet[657]: E1002 00:37:12.484569     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.193163 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:17 old-k8s-version-920941 kubelet[657]: E1002 00:37:17.036843     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.193488 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:27 old-k8s-version-920941 kubelet[657]: E1002 00:37:27.037002     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.193671 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:28 old-k8s-version-920941 kubelet[657]: E1002 00:37:28.036803     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.193857 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:40 old-k8s-version-920941 kubelet[657]: E1002 00:37:40.036708     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.194183 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:42 old-k8s-version-920941 kubelet[657]: E1002 00:37:42.036260     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.194508 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:54 old-k8s-version-920941 kubelet[657]: E1002 00:37:54.036836     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.196937 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:54 old-k8s-version-920941 kubelet[657]: E1002 00:37:54.044847     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1002 00:40:27.197264 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:05 old-k8s-version-920941 kubelet[657]: E1002 00:38:05.036582     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.197450 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:07 old-k8s-version-920941 kubelet[657]: E1002 00:38:07.040872     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.197779 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:17 old-k8s-version-920941 kubelet[657]: E1002 00:38:17.036860     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.197964 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:21 old-k8s-version-920941 kubelet[657]: E1002 00:38:21.036821     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.198554 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:29 old-k8s-version-920941 kubelet[657]: E1002 00:38:29.977771     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.198882 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:32 old-k8s-version-920941 kubelet[657]: E1002 00:38:32.484422     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.199066 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:36 old-k8s-version-920941 kubelet[657]: E1002 00:38:36.036591     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.199394 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:46 old-k8s-version-920941 kubelet[657]: E1002 00:38:46.036189     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.199582 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:50 old-k8s-version-920941 kubelet[657]: E1002 00:38:50.036961     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.199907 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:59 old-k8s-version-920941 kubelet[657]: E1002 00:38:59.040242     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.200091 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:04 old-k8s-version-920941 kubelet[657]: E1002 00:39:04.036603     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.200416 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:11 old-k8s-version-920941 kubelet[657]: E1002 00:39:11.036751     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.200607 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:17 old-k8s-version-920941 kubelet[657]: E1002 00:39:17.036726     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.200935 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:26 old-k8s-version-920941 kubelet[657]: E1002 00:39:26.036140     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.201122 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:28 old-k8s-version-920941 kubelet[657]: E1002 00:39:28.036705     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.201447 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:37 old-k8s-version-920941 kubelet[657]: E1002 00:39:37.039880     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.201635 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:42 old-k8s-version-920941 kubelet[657]: E1002 00:39:42.037320     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.201962 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:48 old-k8s-version-920941 kubelet[657]: E1002 00:39:48.036281     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.202147 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:55 old-k8s-version-920941 kubelet[657]: E1002 00:39:55.037598     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.202508 1957595 logs.go:138] Found kubelet problem: Oct 02 00:40:02 old-k8s-version-920941 kubelet[657]: E1002 00:40:02.036402     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.202694 1957595 logs.go:138] Found kubelet problem: Oct 02 00:40:08 old-k8s-version-920941 kubelet[657]: E1002 00:40:08.036742     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.203020 1957595 logs.go:138] Found kubelet problem: Oct 02 00:40:15 old-k8s-version-920941 kubelet[657]: E1002 00:40:15.037929     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.203205 1957595 logs.go:138] Found kubelet problem: Oct 02 00:40:20 old-k8s-version-920941 kubelet[657]: E1002 00:40:20.036677     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.203530 1957595 logs.go:138] Found kubelet problem: Oct 02 00:40:27 old-k8s-version-920941 kubelet[657]: E1002 00:40:27.036953     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	I1002 00:40:27.203544 1957595 logs.go:123] Gathering logs for kube-scheduler [19bd9ef04d15a6dccf5970eea7159536699fa21c02ae6b578cb94fb5a6c7af6a] ...
	I1002 00:40:27.203559 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19bd9ef04d15a6dccf5970eea7159536699fa21c02ae6b578cb94fb5a6c7af6a"
	I1002 00:40:27.270245 1957595 logs.go:123] Gathering logs for kindnet [a767e421df83a00b32383377cc55e4151e7bd6f66d1066e2c265fd0d78172cac] ...
	I1002 00:40:27.270278 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a767e421df83a00b32383377cc55e4151e7bd6f66d1066e2c265fd0d78172cac"
	I1002 00:40:27.330354 1957595 logs.go:123] Gathering logs for kubernetes-dashboard [6d607cf4c97f9850c9226c3b461edd448574512ffe663325bf684f333c1bef50] ...
	I1002 00:40:27.330384 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d607cf4c97f9850c9226c3b461edd448574512ffe663325bf684f333c1bef50"
	I1002 00:40:27.377052 1957595 logs.go:123] Gathering logs for dmesg ...
	I1002 00:40:27.377080 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:40:27.396038 1957595 logs.go:123] Gathering logs for kube-scheduler [87d3a4aa9e474d9d563b52794771a880f456688ecc28b5c9391254d7e388d601] ...
	I1002 00:40:27.396069 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87d3a4aa9e474d9d563b52794771a880f456688ecc28b5c9391254d7e388d601"
	I1002 00:40:27.434135 1957595 logs.go:123] Gathering logs for kube-controller-manager [ad357c0209d969730c7b744d5c2fbad0a7c610d618ddddb9a2fc771ce023b605] ...
	I1002 00:40:27.434161 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad357c0209d969730c7b744d5c2fbad0a7c610d618ddddb9a2fc771ce023b605"
	I1002 00:40:27.489634 1957595 logs.go:123] Gathering logs for kube-controller-manager [9882de920e5952a14a2e090a9a310cf476deae60c93e9ecd7f4384e8d6012f04] ...
	I1002 00:40:27.489667 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9882de920e5952a14a2e090a9a310cf476deae60c93e9ecd7f4384e8d6012f04"
	I1002 00:40:27.551020 1957595 logs.go:123] Gathering logs for containerd ...
	I1002 00:40:27.551058 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1002 00:40:27.615278 1957595 logs.go:123] Gathering logs for kube-apiserver [11ec001392bd7c1b6e1804233a847099114357802451cb3dfc4f819481c9eac5] ...
	I1002 00:40:27.615315 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11ec001392bd7c1b6e1804233a847099114357802451cb3dfc4f819481c9eac5"
	I1002 00:40:27.699876 1957595 logs.go:123] Gathering logs for etcd [8c7992aa581c091fd0ff51c777b72d5212ca433204a6a429359b1bf731b055f4] ...
	I1002 00:40:27.699910 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c7992aa581c091fd0ff51c777b72d5212ca433204a6a429359b1bf731b055f4"
	I1002 00:40:27.745907 1957595 logs.go:123] Gathering logs for kube-proxy [7d0046a7ae8533f8fe5ca98c95ba71a7292736d0effab8db2b682672e751ab5f] ...
	I1002 00:40:27.745938 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d0046a7ae8533f8fe5ca98c95ba71a7292736d0effab8db2b682672e751ab5f"
	I1002 00:40:27.790931 1957595 logs.go:123] Gathering logs for kindnet [1051ac83c993493af1544df235a7ec482b2588f13293c80a3af3112eff956a41] ...
	I1002 00:40:27.790957 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1051ac83c993493af1544df235a7ec482b2588f13293c80a3af3112eff956a41"
	I1002 00:40:27.841399 1957595 logs.go:123] Gathering logs for storage-provisioner [287dac3a47db438435ee5f4915263ed26e3f6eae852bb618365b6490d95b4453] ...
	I1002 00:40:27.841438 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 287dac3a47db438435ee5f4915263ed26e3f6eae852bb618365b6490d95b4453"
	I1002 00:40:27.882342 1957595 logs.go:123] Gathering logs for storage-provisioner [4a7d63ad64058ab8a816f95771113a6646e7f49235241d0b9c99ee631cdf7f38] ...
	I1002 00:40:27.882368 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a7d63ad64058ab8a816f95771113a6646e7f49235241d0b9c99ee631cdf7f38"
	I1002 00:40:27.925417 1957595 logs.go:123] Gathering logs for container status ...
	I1002 00:40:27.925443 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:40:27.970197 1957595 out.go:358] Setting ErrFile to fd 2...
	I1002 00:40:27.970228 1957595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1002 00:40:27.970391 1957595 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1002 00:40:27.970409 1957595 out.go:270]   Oct 02 00:40:02 old-k8s-version-920941 kubelet[657]: E1002 00:40:02.036402     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	  Oct 02 00:40:02 old-k8s-version-920941 kubelet[657]: E1002 00:40:02.036402     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.970430 1957595 out.go:270]   Oct 02 00:40:08 old-k8s-version-920941 kubelet[657]: E1002 00:40:08.036742     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 02 00:40:08 old-k8s-version-920941 kubelet[657]: E1002 00:40:08.036742     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.970447 1957595 out.go:270]   Oct 02 00:40:15 old-k8s-version-920941 kubelet[657]: E1002 00:40:15.037929     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	  Oct 02 00:40:15 old-k8s-version-920941 kubelet[657]: E1002 00:40:15.037929     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.970454 1957595 out.go:270]   Oct 02 00:40:20 old-k8s-version-920941 kubelet[657]: E1002 00:40:20.036677     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 02 00:40:20 old-k8s-version-920941 kubelet[657]: E1002 00:40:20.036677     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.970464 1957595 out.go:270]   Oct 02 00:40:27 old-k8s-version-920941 kubelet[657]: E1002 00:40:27.036953     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	  Oct 02 00:40:27 old-k8s-version-920941 kubelet[657]: E1002 00:40:27.036953     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	I1002 00:40:27.970475 1957595 out.go:358] Setting ErrFile to fd 2...
	I1002 00:40:27.970482 1957595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:40:37.971296 1957595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:40:37.984057 1957595 api_server.go:72] duration metric: took 6m0.754066339s to wait for apiserver process to appear ...
	I1002 00:40:37.984081 1957595 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:40:37.986391 1957595 out.go:201] 
	W1002 00:40:37.988342 1957595 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W1002 00:40:37.988362 1957595 out.go:270] * 
	* 
	W1002 00:40:37.990636 1957595 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 00:40:37.993660 1957595 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-920941 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-920941
helpers_test.go:235: (dbg) docker inspect old-k8s-version-920941:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e4739c43a663a03a580e4c0a06ade93199a8e08b579ae89b9ddb6a28069fcfd",
	        "Created": "2024-10-02T00:31:52.317877874Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1957841,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-02T00:34:29.880574444Z",
	            "FinishedAt": "2024-10-02T00:34:28.805138361Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/2e4739c43a663a03a580e4c0a06ade93199a8e08b579ae89b9ddb6a28069fcfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e4739c43a663a03a580e4c0a06ade93199a8e08b579ae89b9ddb6a28069fcfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e4739c43a663a03a580e4c0a06ade93199a8e08b579ae89b9ddb6a28069fcfd/hosts",
	        "LogPath": "/var/lib/docker/containers/2e4739c43a663a03a580e4c0a06ade93199a8e08b579ae89b9ddb6a28069fcfd/2e4739c43a663a03a580e4c0a06ade93199a8e08b579ae89b9ddb6a28069fcfd-json.log",
	        "Name": "/old-k8s-version-920941",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-920941:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-920941",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b27fc6b18db023e896bcc76a659701aae10b656608703466a3718034425266cc-init/diff:/var/lib/docker/overlay2/f36fd63656976433bbd6b304cfd5552e0c71ee74203e3ec14aaa10779b0a0aa6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b27fc6b18db023e896bcc76a659701aae10b656608703466a3718034425266cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b27fc6b18db023e896bcc76a659701aae10b656608703466a3718034425266cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b27fc6b18db023e896bcc76a659701aae10b656608703466a3718034425266cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-920941",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-920941/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-920941",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-920941",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-920941",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b11eac41911539573af28daa1cc44899d38a0fd2008fb94c1df8f6f201d17579",
	            "SandboxKey": "/var/run/docker/netns/b11eac419115",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34955"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34956"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34959"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34957"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34958"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-920941": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8f85093bb2a5c243641274272235510daba8fccf2850e4cdeb2fb2978b0f1271",
	                    "EndpointID": "333e363ce59735470d618de13586caa26a7b6577ba9f4df95dcb23c4b87c1d55",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-920941",
	                        "2e4739c43a66"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-920941 -n old-k8s-version-920941
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-920941 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-920941 logs -n 25: (1.991684294s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-env-378598                            | force-systemd-env-378598 | jenkins | v1.34.0 | 02 Oct 24 00:30 UTC | 02 Oct 24 00:31 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| pause   | -p pause-489543                                        | pause-489543             | jenkins | v1.34.0 | 02 Oct 24 00:30 UTC | 02 Oct 24 00:30 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| unpause | -p pause-489543                                        | pause-489543             | jenkins | v1.34.0 | 02 Oct 24 00:30 UTC | 02 Oct 24 00:30 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| pause   | -p pause-489543                                        | pause-489543             | jenkins | v1.34.0 | 02 Oct 24 00:30 UTC | 02 Oct 24 00:30 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| delete  | -p pause-489543                                        | pause-489543             | jenkins | v1.34.0 | 02 Oct 24 00:30 UTC | 02 Oct 24 00:30 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| delete  | -p pause-489543                                        | pause-489543             | jenkins | v1.34.0 | 02 Oct 24 00:30 UTC | 02 Oct 24 00:30 UTC |
	| start   | -p cert-expiration-603955                              | cert-expiration-603955   | jenkins | v1.34.0 | 02 Oct 24 00:30 UTC | 02 Oct 24 00:31 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-378598                               | force-systemd-env-378598 | jenkins | v1.34.0 | 02 Oct 24 00:31 UTC | 02 Oct 24 00:31 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-378598                            | force-systemd-env-378598 | jenkins | v1.34.0 | 02 Oct 24 00:31 UTC | 02 Oct 24 00:31 UTC |
	| start   | -p cert-options-043656                                 | cert-options-043656      | jenkins | v1.34.0 | 02 Oct 24 00:31 UTC | 02 Oct 24 00:31 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-043656 ssh                                | cert-options-043656      | jenkins | v1.34.0 | 02 Oct 24 00:31 UTC | 02 Oct 24 00:31 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-043656 -- sudo                         | cert-options-043656      | jenkins | v1.34.0 | 02 Oct 24 00:31 UTC | 02 Oct 24 00:31 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-043656                                 | cert-options-043656      | jenkins | v1.34.0 | 02 Oct 24 00:31 UTC | 02 Oct 24 00:31 UTC |
	| start   | -p old-k8s-version-920941                              | old-k8s-version-920941   | jenkins | v1.34.0 | 02 Oct 24 00:31 UTC | 02 Oct 24 00:34 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-920941        | old-k8s-version-920941   | jenkins | v1.34.0 | 02 Oct 24 00:34 UTC | 02 Oct 24 00:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-920941                              | old-k8s-version-920941   | jenkins | v1.34.0 | 02 Oct 24 00:34 UTC | 02 Oct 24 00:34 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| start   | -p cert-expiration-603955                              | cert-expiration-603955   | jenkins | v1.34.0 | 02 Oct 24 00:34 UTC | 02 Oct 24 00:34 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-603955                              | cert-expiration-603955   | jenkins | v1.34.0 | 02 Oct 24 00:34 UTC | 02 Oct 24 00:34 UTC |
	| addons  | enable dashboard -p old-k8s-version-920941             | old-k8s-version-920941   | jenkins | v1.34.0 | 02 Oct 24 00:34 UTC | 02 Oct 24 00:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-920941                              | old-k8s-version-920941   | jenkins | v1.34.0 | 02 Oct 24 00:34 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p no-preload-643266                                   | no-preload-643266        | jenkins | v1.34.0 | 02 Oct 24 00:34 UTC | 02 Oct 24 00:35 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-643266             | no-preload-643266        | jenkins | v1.34.0 | 02 Oct 24 00:35 UTC | 02 Oct 24 00:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-643266                                   | no-preload-643266        | jenkins | v1.34.0 | 02 Oct 24 00:35 UTC | 02 Oct 24 00:36 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-643266                  | no-preload-643266        | jenkins | v1.34.0 | 02 Oct 24 00:36 UTC | 02 Oct 24 00:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-643266                                   | no-preload-643266        | jenkins | v1.34.0 | 02 Oct 24 00:36 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
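Each block of rows in the table above is a single minikube invocation whose flags were wrapped across table lines. Re-joined onto one command line, the final entry (the second start of no-preload-643266) reads as follows; this is purely a reconstruction of that table row, not an additional command that was run:

    out/minikube-linux-arm64 start -p no-preload-643266 \
      --memory=2200 --alsologtostderr --wait=true --preload=false \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.31.1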
	
	
	==> Last Start <==
	Log file created at: 2024/10/02 00:36:03
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 00:36:03.012083 1964993 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:36:03.012257 1964993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:36:03.012287 1964993 out.go:358] Setting ErrFile to fd 2...
	I1002 00:36:03.012311 1964993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:36:03.012630 1964993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	I1002 00:36:03.013100 1964993 out.go:352] Setting JSON to false
	I1002 00:36:03.014388 1964993 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":29910,"bootTime":1727799453,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 00:36:03.014474 1964993 start.go:139] virtualization:  
	I1002 00:36:03.017767 1964993 out.go:177] * [no-preload-643266] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1002 00:36:03.019801 1964993 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:36:03.019869 1964993 notify.go:220] Checking for updates...
	I1002 00:36:03.024496 1964993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:36:03.026854 1964993 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1002 00:36:03.029244 1964993 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	I1002 00:36:03.031557 1964993 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 00:36:03.033819 1964993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:36:03.040196 1964993 config.go:182] Loaded profile config "no-preload-643266": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1002 00:36:03.040787 1964993 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:36:03.069802 1964993 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1002 00:36:03.069937 1964993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:36:03.127957 1964993 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-02 00:36:03.116885482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:36:03.128076 1964993 docker.go:318] overlay module found
	I1002 00:36:03.130835 1964993 out.go:177] * Using the docker driver based on existing profile
	I1002 00:36:03.133066 1964993 start.go:297] selected driver: docker
	I1002 00:36:03.133085 1964993 start.go:901] validating driver "docker" against &{Name:no-preload-643266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-643266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:36:03.133196 1964993 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:36:03.133870 1964993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:36:03.193480 1964993 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-02 00:36:03.183554042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:36:03.193888 1964993 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:36:03.193918 1964993 cni.go:84] Creating CNI manager for ""
	I1002 00:36:03.193963 1964993 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 00:36:03.194008 1964993 start.go:340] cluster config:
	{Name:no-preload-643266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-643266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:36:03.197302 1964993 out.go:177] * Starting "no-preload-643266" primary control-plane node in "no-preload-643266" cluster
	I1002 00:36:03.199404 1964993 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1002 00:36:03.201267 1964993 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1002 00:36:03.203447 1964993 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1002 00:36:03.203558 1964993 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1002 00:36:03.203590 1964993 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/config.json ...
	I1002 00:36:03.203889 1964993 cache.go:107] acquiring lock: {Name:mk954e43b89743f42b2de507ef841acc0ca39d6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:36:03.203972 1964993 cache.go:115] /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 00:36:03.203986 1964993 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.193µs
	I1002 00:36:03.203994 1964993 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 00:36:03.204011 1964993 cache.go:107] acquiring lock: {Name:mk9bb3871f3086783b64ae807438df1fba64e594 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:36:03.204048 1964993 cache.go:115] /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1002 00:36:03.204058 1964993 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 49.156µs
	I1002 00:36:03.204064 1964993 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1002 00:36:03.204074 1964993 cache.go:107] acquiring lock: {Name:mk4c51eaf8981d34d6598e6b8db8db3dd288a37a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:36:03.204102 1964993 cache.go:115] /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1002 00:36:03.204111 1964993 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 38.26µs
	I1002 00:36:03.204118 1964993 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1002 00:36:03.204128 1964993 cache.go:107] acquiring lock: {Name:mkf2bf7103c8cfa890588d699e478e04e3da1e42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:36:03.204164 1964993 cache.go:115] /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1002 00:36:03.204174 1964993 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 47.153µs
	I1002 00:36:03.204180 1964993 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1002 00:36:03.204189 1964993 cache.go:107] acquiring lock: {Name:mk2fc743a4b7e5a56565f05839bd833f22721f05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:36:03.204220 1964993 cache.go:115] /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1002 00:36:03.204229 1964993 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 41.468µs
	I1002 00:36:03.204236 1964993 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1002 00:36:03.204245 1964993 cache.go:107] acquiring lock: {Name:mkaad64734fc1f6a80290eb17797507eac33c257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:36:03.204282 1964993 cache.go:115] /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1002 00:36:03.204364 1964993 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 88.236µs
	I1002 00:36:03.204379 1964993 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1002 00:36:03.204405 1964993 cache.go:107] acquiring lock: {Name:mkb1d087e33a18944023127d63db103ec03d9d82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:36:03.204496 1964993 cache.go:115] /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1002 00:36:03.204509 1964993 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 118.127µs
	I1002 00:36:03.204532 1964993 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1002 00:36:03.204556 1964993 cache.go:107] acquiring lock: {Name:mk184f44675ab76a6f5b3df430dcbfedf4db9cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:36:03.204597 1964993 cache.go:115] /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1002 00:36:03.204607 1964993 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 53.439µs
	I1002 00:36:03.204613 1964993 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1002 00:36:03.204631 1964993 cache.go:87] Successfully saved all images to host disk.
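The cache.go lines above all hit the same on-disk layout: an image reference registry/repo:tag is stored as a tarball under $MINIKUBE_HOME/cache/images/<arch>/registry/repo_tag. A minimal sketch of that mapping, using paths taken from this run (the shell helper itself is illustrative, not minikube code):

    # How an image ref maps onto the cache paths logged above (illustration only)
    MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
    image="registry.k8s.io/kube-apiserver:v1.31.1"
    echo "$MINIKUBE_HOME/cache/images/arm64/${image%:*}_${image##*:}"
    # -> .../cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1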
	I1002 00:36:03.228593 1964993 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1002 00:36:03.228613 1964993 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1002 00:36:03.228633 1964993 cache.go:194] Successfully downloaded all kic artifacts
	I1002 00:36:03.228664 1964993 start.go:360] acquireMachinesLock for no-preload-643266: {Name:mk2ae7d5f7b92344b0265dafab4f316472219fd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:36:03.228727 1964993 start.go:364] duration metric: took 47.909µs to acquireMachinesLock for "no-preload-643266"
	I1002 00:36:03.228748 1964993 start.go:96] Skipping create...Using existing machine configuration
	I1002 00:36:03.228754 1964993 fix.go:54] fixHost starting: 
	I1002 00:36:03.229109 1964993 cli_runner.go:164] Run: docker container inspect no-preload-643266 --format={{.State.Status}}
	I1002 00:36:03.246840 1964993 fix.go:112] recreateIfNeeded on no-preload-643266: state=Stopped err=<nil>
	W1002 00:36:03.246883 1964993 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 00:36:03.249430 1964993 out.go:177] * Restarting existing docker container for "no-preload-643266" ...
	I1002 00:35:59.716913 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:01.717482 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:03.718388 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:03.251587 1964993 cli_runner.go:164] Run: docker start no-preload-643266
	I1002 00:36:03.565967 1964993 cli_runner.go:164] Run: docker container inspect no-preload-643266 --format={{.State.Status}}
	I1002 00:36:03.589683 1964993 kic.go:430] container "no-preload-643266" state is running.
	I1002 00:36:03.590138 1964993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643266
	I1002 00:36:03.615667 1964993 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/config.json ...
	I1002 00:36:03.615884 1964993 machine.go:93] provisionDockerMachine start ...
	I1002 00:36:03.615949 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:03.637821 1964993 main.go:141] libmachine: Using SSH client type: native
	I1002 00:36:03.638085 1964993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34965 <nil> <nil>}
	I1002 00:36:03.638095 1964993 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 00:36:03.638854 1964993 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 00:36:06.776014 1964993 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643266
	
	I1002 00:36:06.776038 1964993 ubuntu.go:169] provisioning hostname "no-preload-643266"
	I1002 00:36:06.776156 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:06.794936 1964993 main.go:141] libmachine: Using SSH client type: native
	I1002 00:36:06.795219 1964993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34965 <nil> <nil>}
	I1002 00:36:06.795238 1964993 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-643266 && echo "no-preload-643266" | sudo tee /etc/hostname
	I1002 00:36:06.953429 1964993 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643266
	
	I1002 00:36:06.953517 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:06.970857 1964993 main.go:141] libmachine: Using SSH client type: native
	I1002 00:36:06.971123 1964993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34965 <nil> <nil>}
	I1002 00:36:06.971147 1964993 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-643266' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-643266/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-643266' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:36:07.108490 1964993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 00:36:07.108517 1964993 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19740-1745120/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-1745120/.minikube}
	I1002 00:36:07.108549 1964993 ubuntu.go:177] setting up certificates
	I1002 00:36:07.108558 1964993 provision.go:84] configureAuth start
	I1002 00:36:07.108618 1964993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643266
	I1002 00:36:07.125138 1964993 provision.go:143] copyHostCerts
	I1002 00:36:07.125203 1964993 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.pem, removing ...
	I1002 00:36:07.125224 1964993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.pem
	I1002 00:36:07.125297 1964993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.pem (1082 bytes)
	I1002 00:36:07.125411 1964993 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-1745120/.minikube/cert.pem, removing ...
	I1002 00:36:07.125422 1964993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-1745120/.minikube/cert.pem
	I1002 00:36:07.125449 1964993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-1745120/.minikube/cert.pem (1123 bytes)
	I1002 00:36:07.125518 1964993 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-1745120/.minikube/key.pem, removing ...
	I1002 00:36:07.125527 1964993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-1745120/.minikube/key.pem
	I1002 00:36:07.125554 1964993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-1745120/.minikube/key.pem (1675 bytes)
	I1002 00:36:07.125606 1964993 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca-key.pem org=jenkins.no-preload-643266 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-643266]
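The server certificate generated above is issued for the SANs listed in the log (127.0.0.1, 192.168.94.2, localhost, minikube, no-preload-643266) and is copied to /etc/docker/server.pem a few lines further down. If those SANs ever need checking by hand, a standard openssl query does it; this is an illustrative check, not part of the captured run:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'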
	I1002 00:36:07.502004 1964993 provision.go:177] copyRemoteCerts
	I1002 00:36:07.502076 1964993 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:36:07.502127 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:07.519074 1964993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34965 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/no-preload-643266/id_rsa Username:docker}
	I1002 00:36:07.618148 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 00:36:07.645121 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 00:36:07.671758 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 00:36:07.698778 1964993 provision.go:87] duration metric: took 590.196986ms to configureAuth
	I1002 00:36:07.698848 1964993 ubuntu.go:193] setting minikube options for container-runtime
	I1002 00:36:07.699078 1964993 config.go:182] Loaded profile config "no-preload-643266": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1002 00:36:07.699093 1964993 machine.go:96] duration metric: took 4.083201255s to provisionDockerMachine
	I1002 00:36:07.699102 1964993 start.go:293] postStartSetup for "no-preload-643266" (driver="docker")
	I1002 00:36:07.699113 1964993 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 00:36:07.699170 1964993 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 00:36:07.699219 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:07.721268 1964993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34965 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/no-preload-643266/id_rsa Username:docker}
	I1002 00:36:07.824821 1964993 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 00:36:07.829260 1964993 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 00:36:07.829331 1964993 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 00:36:07.829349 1964993 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 00:36:07.829372 1964993 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1002 00:36:07.829387 1964993 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-1745120/.minikube/addons for local assets ...
	I1002 00:36:07.829460 1964993 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-1745120/.minikube/files for local assets ...
	I1002 00:36:07.829565 1964993 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-1745120/.minikube/files/etc/ssl/certs/17505052.pem -> 17505052.pem in /etc/ssl/certs
	I1002 00:36:07.829705 1964993 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 00:36:07.838575 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/files/etc/ssl/certs/17505052.pem --> /etc/ssl/certs/17505052.pem (1708 bytes)
	I1002 00:36:07.863950 1964993 start.go:296] duration metric: took 164.830894ms for postStartSetup
	I1002 00:36:07.864034 1964993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:36:07.864075 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:07.881094 1964993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34965 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/no-preload-643266/id_rsa Username:docker}
	I1002 00:36:07.974091 1964993 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 00:36:07.978822 1964993 fix.go:56] duration metric: took 4.750054323s for fixHost
	I1002 00:36:07.978847 1964993 start.go:83] releasing machines lock for "no-preload-643266", held for 4.750111691s
	I1002 00:36:07.978920 1964993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643266
	I1002 00:36:07.994918 1964993 ssh_runner.go:195] Run: cat /version.json
	I1002 00:36:07.994989 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:07.995261 1964993 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 00:36:07.995320 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:08.017266 1964993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34965 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/no-preload-643266/id_rsa Username:docker}
	I1002 00:36:08.026641 1964993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34965 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/no-preload-643266/id_rsa Username:docker}
	I1002 00:36:08.111759 1964993 ssh_runner.go:195] Run: systemctl --version
	I1002 00:36:08.259449 1964993 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 00:36:08.264118 1964993 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1002 00:36:08.283453 1964993 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
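The find/sed pipeline above inserts a "name": "loopback" field where it is missing and pins cniVersion to 1.0.0 in whatever *loopback.conf* file ships under /etc/cni/net.d. After patching, such a file would look roughly like the sketch below; the exact file name and any surrounding fields are assumptions, only the two patched keys come from the command itself:

    cat /etc/cni/net.d/*loopback.conf*
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "loopback",
    #   "type": "loopback"
    # }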
	I1002 00:36:08.283535 1964993 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 00:36:08.292611 1964993 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 00:36:08.292638 1964993 start.go:495] detecting cgroup driver to use...
	I1002 00:36:08.292672 1964993 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 00:36:08.292723 1964993 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 00:36:08.306650 1964993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 00:36:08.318562 1964993 docker.go:217] disabling cri-docker service (if available) ...
	I1002 00:36:08.318632 1964993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 00:36:08.332079 1964993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 00:36:08.343926 1964993 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 00:36:08.442764 1964993 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 00:36:08.539357 1964993 docker.go:233] disabling docker service ...
	I1002 00:36:08.539452 1964993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 00:36:08.552240 1964993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 00:36:08.563849 1964993 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 00:36:08.645932 1964993 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 00:36:08.761762 1964993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 00:36:08.787905 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 00:36:08.804861 1964993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1002 00:36:08.821875 1964993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 00:36:08.832817 1964993 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 00:36:08.832931 1964993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 00:36:08.843151 1964993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 00:36:08.852906 1964993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 00:36:08.862683 1964993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 00:36:08.872378 1964993 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 00:36:08.881765 1964993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 00:36:08.891575 1964993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 00:36:08.901977 1964993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
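Taken together, the sed edits above point /etc/containerd/config.toml at the pause 3.10 sandbox image, the cgroupfs driver (SystemdCgroup = false), the runc v2 runtime and the /etc/cni/net.d conf dir, and re-enable unprivileged ports. A hedged sketch of the fields they touch follows; it is not a dump of the file from this run, and unrelated settings are omitted:

    # Relevant fragments of /etc/containerd/config.toml after the edits above
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     enable_unprivileged_ports = true
    #     sandbox_image = "registry.k8s.io/pause:3.10"
    #     restrict_oom_score_adj = false
    #     [plugins."io.containerd.grpc.v1.cri".cni]
    #       conf_dir = "/etc/cni/net.d"
    #     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #       SystemdCgroup = false
    sudo containerd config dump | grep -nE 'SystemdCgroup|sandbox_image|conf_dir'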
	I1002 00:36:08.914260 1964993 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 00:36:08.923423 1964993 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 00:36:08.932264 1964993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:36:09.021911 1964993 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 00:36:09.175407 1964993 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 00:36:09.175526 1964993 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 00:36:09.179199 1964993 start.go:563] Will wait 60s for crictl version
	I1002 00:36:09.179305 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:36:09.182715 1964993 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 00:36:09.235874 1964993 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1002 00:36:09.236009 1964993 ssh_runner.go:195] Run: containerd --version
	I1002 00:36:09.263561 1964993 ssh_runner.go:195] Run: containerd --version
	I1002 00:36:09.287463 1964993 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1002 00:36:06.217010 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:08.775943 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:09.289624 1964993 cli_runner.go:164] Run: docker network inspect no-preload-643266 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 00:36:09.303638 1964993 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1002 00:36:09.307360 1964993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
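The /bin/bash -c one-liner above is the usual /etc/hosts rewrite: drop any stale host.minikube.internal line, append the current mapping, and copy a temp file back over /etc/hosts. The same idiom, unpacked for readability (the 192.168.94.1 address and the tab-separated format come straight from the logged command):

    tmp=/tmp/h.$$
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.94.1\thost.minikube.internal\n'
    } > "$tmp"
    sudo cp "$tmp" /etc/hosts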
	I1002 00:36:09.318053 1964993 kubeadm.go:883] updating cluster {Name:no-preload-643266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-643266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 00:36:09.318185 1964993 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1002 00:36:09.318237 1964993 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:36:09.361704 1964993 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 00:36:09.361739 1964993 cache_images.go:84] Images are preloaded, skipping loading
	I1002 00:36:09.361748 1964993 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.31.1 containerd true true} ...
	I1002 00:36:09.361854 1964993 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-643266 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-643266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
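The [Unit]/[Service]/[Install] fragments logged above land in the 321-byte systemd drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down. Assembled, the drop-in would look roughly like this; the exact file layout is an assumption, only the ExecStart flags are verbatim from the log:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (reconstructed sketch)
    # [Unit]
    # Wants=containerd.service
    #
    # [Service]
    # ExecStart=
    # ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-643266 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
    #
    # [Install]
    systemctl cat kubelet    # would show the unit plus this drop-in once applied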
	I1002 00:36:09.361929 1964993 ssh_runner.go:195] Run: sudo crictl info
	I1002 00:36:09.400820 1964993 cni.go:84] Creating CNI manager for ""
	I1002 00:36:09.400895 1964993 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 00:36:09.400918 1964993 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 00:36:09.400970 1964993 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-643266 NodeName:no-preload-643266 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 00:36:09.401146 1964993 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-643266"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
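The YAML above is the kubeadm config generated for this restart; a few lines below it is written to /var/tmp/minikube/kubeadm.yaml.new (2171 bytes). As a hedged aside for anyone reproducing the failure outside the test harness, the same file can be exercised with kubeadm's dry-run mode, assuming the staged kubeadm binary sits next to the kubelet under /var/lib/minikube/binaries/v1.31.1:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run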
	
	I1002 00:36:09.401233 1964993 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1002 00:36:09.410461 1964993 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 00:36:09.410530 1964993 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 00:36:09.419198 1964993 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1002 00:36:09.437436 1964993 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 00:36:09.455642 1964993 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I1002 00:36:09.473653 1964993 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1002 00:36:09.476899 1964993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:36:09.488140 1964993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:36:09.575909 1964993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:36:09.592527 1964993 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266 for IP: 192.168.94.2
	I1002 00:36:09.592551 1964993 certs.go:194] generating shared ca certs ...
	I1002 00:36:09.592569 1964993 certs.go:226] acquiring lock for ca certs: {Name:mkeb93c689dc39169cb991acba6d63d702f9e0e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:36:09.592720 1964993 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.key
	I1002 00:36:09.592774 1964993 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.key
	I1002 00:36:09.592787 1964993 certs.go:256] generating profile certs ...
	I1002 00:36:09.592869 1964993 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.key
	I1002 00:36:09.592946 1964993 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/apiserver.key.73af85a7
	I1002 00:36:09.592995 1964993 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/proxy-client.key
	I1002 00:36:09.593165 1964993 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/1750505.pem (1338 bytes)
	W1002 00:36:09.593235 1964993 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/1750505_empty.pem, impossibly tiny 0 bytes
	I1002 00:36:09.593249 1964993 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 00:36:09.593278 1964993 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/ca.pem (1082 bytes)
	I1002 00:36:09.593306 1964993 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/cert.pem (1123 bytes)
	I1002 00:36:09.593335 1964993 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/key.pem (1675 bytes)
	I1002 00:36:09.593403 1964993 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-1745120/.minikube/files/etc/ssl/certs/17505052.pem (1708 bytes)
	I1002 00:36:09.594032 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 00:36:09.624825 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 00:36:09.662130 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 00:36:09.698284 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 00:36:09.742751 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 00:36:09.783139 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 00:36:09.827711 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 00:36:09.855644 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 00:36:09.885405 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/files/etc/ssl/certs/17505052.pem --> /usr/share/ca-certificates/17505052.pem (1708 bytes)
	I1002 00:36:09.920797 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 00:36:09.947679 1964993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-1745120/.minikube/certs/1750505.pem --> /usr/share/ca-certificates/1750505.pem (1338 bytes)
	I1002 00:36:09.972682 1964993 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 00:36:09.990802 1964993 ssh_runner.go:195] Run: openssl version
	I1002 00:36:09.999196 1964993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17505052.pem && ln -fs /usr/share/ca-certificates/17505052.pem /etc/ssl/certs/17505052.pem"
	I1002 00:36:10.011327 1964993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17505052.pem
	I1002 00:36:10.015466 1964993 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:54 /usr/share/ca-certificates/17505052.pem
	I1002 00:36:10.015596 1964993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17505052.pem
	I1002 00:36:10.024999 1964993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17505052.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 00:36:10.035682 1964993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 00:36:10.047088 1964993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:36:10.051373 1964993 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 23:43 /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:36:10.051476 1964993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:36:10.058839 1964993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 00:36:10.068022 1964993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1750505.pem && ln -fs /usr/share/ca-certificates/1750505.pem /etc/ssl/certs/1750505.pem"
	I1002 00:36:10.077605 1964993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1750505.pem
	I1002 00:36:10.081331 1964993 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:54 /usr/share/ca-certificates/1750505.pem
	I1002 00:36:10.081431 1964993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1750505.pem
	I1002 00:36:10.088670 1964993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1750505.pem /etc/ssl/certs/51391683.0"
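The hash values in the symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) come from `openssl x509 -hash -noout`, the subject-name hash OpenSSL uses to look up trust anchors in /etc/ssl/certs; the `ln -fs` registers the certificate without running c_rehash. A minimal sketch of that step, assuming the openssl binary is on PATH and ignoring hash collisions (which would need a .1 suffix):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash for certPath and
    // symlinks "<hash>.0" in certsDir to it, the way the logged commands do.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println("link failed:", err)
    	}
    }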
	I1002 00:36:10.098430 1964993 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 00:36:10.103906 1964993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 00:36:10.111348 1964993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 00:36:10.118378 1964993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 00:36:10.125579 1964993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 00:36:10.133393 1964993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 00:36:10.141093 1964993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
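Each `openssl x509 -checkend 86400` above asks whether the certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; a minimal sketch (the certificate path is just one of those from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -checkend 86400` answers for a 24h window.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }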
	I1002 00:36:10.148227 1964993 kubeadm.go:392] StartCluster: {Name:no-preload-643266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-643266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:36:10.148332 1964993 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 00:36:10.148412 1964993 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:36:10.185916 1964993 cri.go:89] found id: "4135867ed4acb2d1c331b95489e27f4be2485ad15403545148d1824e5f1c4e8f"
	I1002 00:36:10.185940 1964993 cri.go:89] found id: "1d500cb37211f32dd14aa83f29576fc4093384b3d5410f2ae3d57c4de3b64e1d"
	I1002 00:36:10.185946 1964993 cri.go:89] found id: "d07462c96835f050f80d7a77904167c7b1f098fb86b43d54b2a294050dc35701"
	I1002 00:36:10.185961 1964993 cri.go:89] found id: "a01e2f4ff5d8ed78b9df7324f5550f47a624ae296408c70204ff351a1b2e4ad0"
	I1002 00:36:10.185965 1964993 cri.go:89] found id: "7cb4fe5d533b8ec48fe48950e554c6aba769ee754aa13719bdb5078978fbfeb6"
	I1002 00:36:10.185968 1964993 cri.go:89] found id: "6d268abb1217cac0a576b1f76c6914b8c8d323cf46b9da078c742aeac3f2fb83"
	I1002 00:36:10.185972 1964993 cri.go:89] found id: "66577b32134293bb35dd337993a9291b47071541572d1e97c391dbf01350de33"
	I1002 00:36:10.185975 1964993 cri.go:89] found id: "200cc54dc91df970755d5924d60cfcdfeea81697ac0a49ddae44e074ffab6870"
	I1002 00:36:10.185985 1964993 cri.go:89] found id: ""
	I1002 00:36:10.186045 1964993 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1002 00:36:10.198519 1964993 cri.go:116] JSON = null
	W1002 00:36:10.198568 1964993 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I1002 00:36:10.198648 1964993 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 00:36:10.207584 1964993 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1002 00:36:10.207602 1964993 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1002 00:36:10.207663 1964993 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 00:36:10.226306 1964993 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 00:36:10.226956 1964993 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-643266" does not appear in /home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1002 00:36:10.227265 1964993 kubeconfig.go:62] /home/jenkins/minikube-integration/19740-1745120/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-643266" cluster setting kubeconfig missing "no-preload-643266" context setting]
	I1002 00:36:10.227741 1964993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/kubeconfig: {Name:mk014bd742e0b0f4a72d987c0fd643ed22274647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
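The kubeconfig repair above adds the missing "no-preload-643266" cluster and context entries back and rewrites the file under a lock. A minimal sketch of the same repair using client-go's clientcmd package; the server endpoint and CA path are assumptions for illustration, and the file lock is omitted:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds a cluster and matching context for name if they are
    // missing, then writes the kubeconfig back to disk.
    func repairKubeconfig(path, name, server, caPath string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	if _, ok := cfg.Clusters[name]; !ok {
    		c := api.NewCluster()
    		c.Server = server
    		c.CertificateAuthority = caPath
    		cfg.Clusters[name] = c
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		ctx := api.NewContext()
    		ctx.Cluster = name
    		ctx.AuthInfo = name
    		cfg.Contexts[name] = ctx
    	}
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	err := repairKubeconfig("/home/jenkins/minikube-integration/19740-1745120/kubeconfig",
    		"no-preload-643266", "https://192.168.94.2:8443",
    		"/home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt")
    	if err != nil {
    		fmt.Println("repair failed:", err)
    	}
    }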
	I1002 00:36:10.229536 1964993 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 00:36:10.242759 1964993 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I1002 00:36:10.242792 1964993 kubeadm.go:597] duration metric: took 35.183679ms to restartPrimaryControlPlane
	I1002 00:36:10.242802 1964993 kubeadm.go:394] duration metric: took 94.586363ms to StartCluster
	I1002 00:36:10.242817 1964993 settings.go:142] acquiring lock: {Name:mk200f8894606b147c1230e7434ca41f474a2cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:36:10.242901 1964993 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1002 00:36:10.243921 1964993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-1745120/kubeconfig: {Name:mk014bd742e0b0f4a72d987c0fd643ed22274647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:36:10.244128 1964993 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 00:36:10.244444 1964993 config.go:182] Loaded profile config "no-preload-643266": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1002 00:36:10.244625 1964993 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:36:10.244693 1964993 addons.go:69] Setting storage-provisioner=true in profile "no-preload-643266"
	I1002 00:36:10.244714 1964993 addons.go:234] Setting addon storage-provisioner=true in "no-preload-643266"
	W1002 00:36:10.244729 1964993 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:36:10.244754 1964993 host.go:66] Checking if "no-preload-643266" exists ...
	I1002 00:36:10.245249 1964993 cli_runner.go:164] Run: docker container inspect no-preload-643266 --format={{.State.Status}}
	I1002 00:36:10.245439 1964993 addons.go:69] Setting default-storageclass=true in profile "no-preload-643266"
	I1002 00:36:10.245459 1964993 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-643266"
	I1002 00:36:10.245724 1964993 cli_runner.go:164] Run: docker container inspect no-preload-643266 --format={{.State.Status}}
	I1002 00:36:10.245832 1964993 addons.go:69] Setting metrics-server=true in profile "no-preload-643266"
	I1002 00:36:10.245868 1964993 addons.go:234] Setting addon metrics-server=true in "no-preload-643266"
	W1002 00:36:10.245906 1964993 addons.go:243] addon metrics-server should already be in state true
	I1002 00:36:10.245951 1964993 host.go:66] Checking if "no-preload-643266" exists ...
	I1002 00:36:10.246498 1964993 cli_runner.go:164] Run: docker container inspect no-preload-643266 --format={{.State.Status}}
	I1002 00:36:10.248376 1964993 addons.go:69] Setting dashboard=true in profile "no-preload-643266"
	I1002 00:36:10.249100 1964993 addons.go:234] Setting addon dashboard=true in "no-preload-643266"
	W1002 00:36:10.249123 1964993 addons.go:243] addon dashboard should already be in state true
	I1002 00:36:10.249158 1964993 host.go:66] Checking if "no-preload-643266" exists ...
	I1002 00:36:10.249621 1964993 cli_runner.go:164] Run: docker container inspect no-preload-643266 --format={{.State.Status}}
	I1002 00:36:10.249075 1964993 out.go:177] * Verifying Kubernetes components...
	I1002 00:36:10.255058 1964993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:36:10.290895 1964993 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:36:10.295873 1964993 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:36:10.295897 1964993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:36:10.295966 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:10.318758 1964993 addons.go:234] Setting addon default-storageclass=true in "no-preload-643266"
	W1002 00:36:10.318783 1964993 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:36:10.318808 1964993 host.go:66] Checking if "no-preload-643266" exists ...
	I1002 00:36:10.319234 1964993 cli_runner.go:164] Run: docker container inspect no-preload-643266 --format={{.State.Status}}
	I1002 00:36:10.326350 1964993 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:36:10.339222 1964993 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:36:10.339265 1964993 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:36:10.339340 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:10.361995 1964993 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1002 00:36:10.363939 1964993 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 00:36:10.368746 1964993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34965 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/no-preload-643266/id_rsa Username:docker}
	I1002 00:36:10.372656 1964993 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:36:10.372677 1964993 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:36:10.372739 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:10.371528 1964993 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 00:36:10.372921 1964993 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 00:36:10.372958 1964993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643266
	I1002 00:36:10.392657 1964993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34965 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/no-preload-643266/id_rsa Username:docker}
	I1002 00:36:10.412698 1964993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34965 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/no-preload-643266/id_rsa Username:docker}
	I1002 00:36:10.437985 1964993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34965 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/no-preload-643266/id_rsa Username:docker}
	I1002 00:36:10.448906 1964993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:36:10.509441 1964993 node_ready.go:35] waiting up to 6m0s for node "no-preload-643266" to be "Ready" ...
	I1002 00:36:10.622826 1964993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:36:10.670692 1964993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:36:10.738654 1964993 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:36:10.738699 1964993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:36:10.800353 1964993 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 00:36:10.800382 1964993 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 00:36:10.861048 1964993 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:36:10.861085 1964993 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:36:10.919243 1964993 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 00:36:10.919278 1964993 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W1002 00:36:10.953419 1964993 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 00:36:10.953460 1964993 retry.go:31] will retry after 294.487568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
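The apply fails here because the apiserver is still coming up (connection refused on localhost:8443), so the addon manifest is retried after a short delay and, a few lines below, re-run with `--force`. A minimal sketch of that retry-with-jitter shape, assuming a simple fixed attempt budget rather than minikube's actual retry.go policy:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping a jittered delay between
    // tries, roughly the shape of the "will retry after ..." lines in the log.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base + time.Duration(rand.Int63n(int64(base))) // simple jitter
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retry(3, 300*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("connection refused") // stand-in for the failed kubectl apply
    		}
    		return nil
    	})
    	fmt.Println("final:", err)
    }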
	I1002 00:36:11.100923 1964993 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:36:11.100951 1964993 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:36:11.109648 1964993 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 00:36:11.109669 1964993 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W1002 00:36:11.195487 1964993 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 00:36:11.195522 1964993 retry.go:31] will retry after 249.86166ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 00:36:11.249088 1964993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:36:11.252548 1964993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:36:11.259216 1964993 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 00:36:11.259242 1964993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 00:36:11.395490 1964993 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 00:36:11.395519 1964993 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 00:36:11.445820 1964993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:36:11.602033 1964993 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 00:36:11.602071 1964993 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 00:36:11.829578 1964993 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 00:36:11.829605 1964993 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 00:36:11.855398 1964993 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 00:36:11.855447 1964993 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 00:36:11.903504 1964993 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:36:11.903531 1964993 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 00:36:11.953696 1964993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:36:11.217838 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:13.217946 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:15.558245 1964993 node_ready.go:49] node "no-preload-643266" has status "Ready":"True"
	I1002 00:36:15.558267 1964993 node_ready.go:38] duration metric: took 5.048772346s for node "no-preload-643266" to be "Ready" ...
	I1002 00:36:15.558277 1964993 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:36:15.610948 1964993 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tx6q7" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:15.715651 1964993 pod_ready.go:93] pod "coredns-7c65d6cfc9-tx6q7" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:15.715723 1964993 pod_ready.go:82] duration metric: took 104.687286ms for pod "coredns-7c65d6cfc9-tx6q7" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:15.715750 1964993 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-643266" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:15.757465 1964993 pod_ready.go:93] pod "etcd-no-preload-643266" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:15.757541 1964993 pod_ready.go:82] duration metric: took 41.770308ms for pod "etcd-no-preload-643266" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:15.757570 1964993 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-643266" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:15.811329 1964993 pod_ready.go:93] pod "kube-apiserver-no-preload-643266" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:15.811389 1964993 pod_ready.go:82] duration metric: took 53.79767ms for pod "kube-apiserver-no-preload-643266" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:15.811414 1964993 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-643266" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:15.833527 1964993 pod_ready.go:93] pod "kube-controller-manager-no-preload-643266" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:15.833599 1964993 pod_ready.go:82] duration metric: took 22.164478ms for pod "kube-controller-manager-no-preload-643266" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:15.833626 1964993 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n9zrc" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:15.840447 1964993 pod_ready.go:93] pod "kube-proxy-n9zrc" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:15.840529 1964993 pod_ready.go:82] duration metric: took 6.875263ms for pod "kube-proxy-n9zrc" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:15.840556 1964993 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-643266" in "kube-system" namespace to be "Ready" ...
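The pod_ready lines that follow are a poll loop over each pod's PodReady condition with a 6m budget. A minimal client-go sketch of that wait, assuming the kubeconfig path from the log and using one of the pod names shown below for illustration:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady mirrors the check behind the pod_ready.go lines: the pod is
    // "Ready" when its PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19740-1745120/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Poll every few seconds, up to 6m, the same budget the log uses.
    	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-6867b74b74-npt4v", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling on transient errors
    			}
    			return isPodReady(pod), nil
    		})
    	fmt.Println("wait result:", err)
    }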
	I1002 00:36:17.848129 1964993 pod_ready.go:103] pod "kube-scheduler-no-preload-643266" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:18.757583 1964993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.508447612s)
	I1002 00:36:18.795470 1964993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.542881507s)
	I1002 00:36:18.795567 1964993 addons.go:475] Verifying addon metrics-server=true in "no-preload-643266"
	I1002 00:36:18.795655 1964993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.349800755s)
	I1002 00:36:18.795948 1964993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.842211532s)
	I1002 00:36:18.798304 1964993 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-643266 addons enable metrics-server
	
	I1002 00:36:18.804997 1964993 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1002 00:36:15.219050 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:17.223403 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:18.807018 1964993 addons.go:510] duration metric: took 8.562387094s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1002 00:36:20.346907 1964993 pod_ready.go:103] pod "kube-scheduler-no-preload-643266" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:22.347409 1964993 pod_ready.go:103] pod "kube-scheduler-no-preload-643266" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:19.717704 1957595 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:21.216971 1957595 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:21.216996 1957595 pod_ready.go:82] duration metric: took 1m11.006105841s for pod "kube-controller-manager-old-k8s-version-920941" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:21.217007 1957595 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-42b7q" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:21.221776 1957595 pod_ready.go:93] pod "kube-proxy-42b7q" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:21.221801 1957595 pod_ready.go:82] duration metric: took 4.785683ms for pod "kube-proxy-42b7q" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:21.221814 1957595 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-920941" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:23.227502 1957595 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:24.847175 1964993 pod_ready.go:103] pod "kube-scheduler-no-preload-643266" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:26.847423 1964993 pod_ready.go:103] pod "kube-scheduler-no-preload-643266" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:25.228486 1957595 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:26.228584 1957595 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-920941" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:26.228662 1957595 pod_ready.go:82] duration metric: took 5.006839188s for pod "kube-scheduler-old-k8s-version-920941" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:26.228688 1957595 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:28.234816 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:28.346732 1964993 pod_ready.go:93] pod "kube-scheduler-no-preload-643266" in "kube-system" namespace has status "Ready":"True"
	I1002 00:36:28.346805 1964993 pod_ready.go:82] duration metric: took 12.506228267s for pod "kube-scheduler-no-preload-643266" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:28.346831 1964993 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace to be "Ready" ...
	I1002 00:36:30.353473 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:32.353504 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:30.236092 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:32.736345 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:34.853505 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:37.353203 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:35.234991 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:37.235434 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:39.236103 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:39.853232 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:42.353771 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:41.236546 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:43.735666 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:44.853109 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:47.352279 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:46.235955 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:48.734957 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:49.352583 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:51.355860 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:50.735615 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:52.735648 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:53.852511 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:56.353123 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:55.236733 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:57.239178 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:58.853415 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:01.353115 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:36:59.734959 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:01.735219 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:04.237345 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:03.852235 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:05.853363 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:07.860997 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:06.735087 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:08.735842 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:10.353155 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:12.353356 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:11.235348 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:13.735763 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:14.853005 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:16.853544 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:16.234810 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:18.235343 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:19.353091 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:21.353914 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:20.735050 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:23.235940 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:23.852324 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:25.853117 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:27.853530 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:25.735045 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:28.234568 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:30.353115 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:32.353593 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:30.235503 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:32.735295 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:34.853499 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:37.352549 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:35.235317 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:37.734651 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:39.357458 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:41.853137 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:40.235325 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:42.235666 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:44.356991 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:46.852913 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:44.735462 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:47.234819 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:49.235228 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:49.353588 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:51.853048 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:51.734924 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:53.735129 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:53.853802 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:56.353155 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:56.234708 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:58.735319 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:37:58.853028 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:01.353740 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:01.234117 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:03.235524 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:03.852907 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:05.853327 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:05.735867 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:08.235269 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:08.352854 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:10.355097 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:12.853707 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:10.235566 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:12.735472 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:15.352858 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:17.353392 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:14.737310 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:17.234735 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:19.854973 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:22.353605 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:19.735217 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:21.735317 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:24.235157 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:24.852694 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:26.853139 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:26.734605 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:28.734860 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:28.853304 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:31.352962 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:30.735008 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:33.234148 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:33.852770 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:36.352821 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:35.234909 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:37.735168 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:38.361643 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:40.853153 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:42.853562 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:39.735540 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:41.735835 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:44.235027 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:45.352516 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:47.352683 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:46.734386 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:48.735126 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:49.352907 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:51.353053 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:50.736039 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:53.234472 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:53.853244 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:56.353621 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:55.234593 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:57.235198 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:59.235364 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:38:58.852889 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:01.352989 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:01.235569 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:03.735110 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:03.852333 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:05.852439 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:07.853669 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:06.235324 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:08.734702 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:09.854272 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:12.352949 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:10.736790 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:13.235130 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:14.853880 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:17.352077 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:15.235733 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:17.734764 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:19.352662 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:21.353365 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:19.734977 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:21.735890 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:24.235463 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:23.853180 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:26.353270 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:26.235565 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:28.735069 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:28.853590 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:31.353512 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:30.735329 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:32.736129 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:33.852933 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:35.853533 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:37.853771 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:35.235603 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:37.734912 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:40.353391 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:42.353774 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:39.736246 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:42.235540 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:44.853309 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:47.353347 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:44.735190 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:47.236303 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:49.852702 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:52.353743 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:49.736213 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:52.235593 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:54.852900 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:57.352486 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:54.735856 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:57.235708 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:59.352844 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:01.852856 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:39:59.735217 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:02.235341 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:04.235464 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:03.853388 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:06.353341 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:06.752875 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:09.235497 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:08.853298 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:11.352652 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:11.739045 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:14.234873 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:13.353618 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:15.852556 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:17.853234 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:16.734971 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:18.735025 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:19.853878 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:22.353470 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:21.234561 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:23.735307 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:24.852830 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:27.354837 1964993 pod_ready.go:103] pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:25.735630 1957595 pod_ready.go:103] pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace has status "Ready":"False"
	I1002 00:40:26.235407 1957595 pod_ready.go:82] duration metric: took 4m0.006692573s for pod "metrics-server-9975d5f86-49vwr" in "kube-system" namespace to be "Ready" ...
	E1002 00:40:26.235429 1957595 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:40:26.235438 1957595 pod_ready.go:39] duration metric: took 5m27.530349402s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:40:26.235452 1957595 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:40:26.235482 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:40:26.235571 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:40:26.281727 1957595 cri.go:89] found id: "0a25b599263d779d6664295d455c78ad67aae1ea393696c88eb6887fedc9bf29"
	I1002 00:40:26.281798 1957595 cri.go:89] found id: "11ec001392bd7c1b6e1804233a847099114357802451cb3dfc4f819481c9eac5"
	I1002 00:40:26.281808 1957595 cri.go:89] found id: ""
	I1002 00:40:26.281817 1957595 logs.go:282] 2 containers: [0a25b599263d779d6664295d455c78ad67aae1ea393696c88eb6887fedc9bf29 11ec001392bd7c1b6e1804233a847099114357802451cb3dfc4f819481c9eac5]
	I1002 00:40:26.281883 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.285748 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.289204 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1002 00:40:26.289276 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:40:26.330580 1957595 cri.go:89] found id: "ab6cea850151c0b10687176375d47fb249199d273efd44a9c5d6a1d39e3f623a"
	I1002 00:40:26.330601 1957595 cri.go:89] found id: "8c7992aa581c091fd0ff51c777b72d5212ca433204a6a429359b1bf731b055f4"
	I1002 00:40:26.330606 1957595 cri.go:89] found id: ""
	I1002 00:40:26.330613 1957595 logs.go:282] 2 containers: [ab6cea850151c0b10687176375d47fb249199d273efd44a9c5d6a1d39e3f623a 8c7992aa581c091fd0ff51c777b72d5212ca433204a6a429359b1bf731b055f4]
	I1002 00:40:26.330687 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.334426 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.338199 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1002 00:40:26.338294 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:40:26.383276 1957595 cri.go:89] found id: "9f4ef323c339f7d0dd395ca641e6b83c6ee625242e1e405bcb230f0280f833f9"
	I1002 00:40:26.383301 1957595 cri.go:89] found id: "8518536c30aef09017c911aee769571a3ca250e88cc9853905c824c874626cac"
	I1002 00:40:26.383306 1957595 cri.go:89] found id: ""
	I1002 00:40:26.383323 1957595 logs.go:282] 2 containers: [9f4ef323c339f7d0dd395ca641e6b83c6ee625242e1e405bcb230f0280f833f9 8518536c30aef09017c911aee769571a3ca250e88cc9853905c824c874626cac]
	I1002 00:40:26.383404 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.387099 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.390623 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:40:26.390695 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:40:26.427645 1957595 cri.go:89] found id: "87d3a4aa9e474d9d563b52794771a880f456688ecc28b5c9391254d7e388d601"
	I1002 00:40:26.427672 1957595 cri.go:89] found id: "19bd9ef04d15a6dccf5970eea7159536699fa21c02ae6b578cb94fb5a6c7af6a"
	I1002 00:40:26.427683 1957595 cri.go:89] found id: ""
	I1002 00:40:26.427691 1957595 logs.go:282] 2 containers: [87d3a4aa9e474d9d563b52794771a880f456688ecc28b5c9391254d7e388d601 19bd9ef04d15a6dccf5970eea7159536699fa21c02ae6b578cb94fb5a6c7af6a]
	I1002 00:40:26.427749 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.431220 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.434462 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:40:26.434533 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:40:26.474025 1957595 cri.go:89] found id: "8ff3cc680606db52e7aec936aa5a3da89b1f05ec9c72362a02657eefc93740a6"
	I1002 00:40:26.474046 1957595 cri.go:89] found id: "7d0046a7ae8533f8fe5ca98c95ba71a7292736d0effab8db2b682672e751ab5f"
	I1002 00:40:26.474051 1957595 cri.go:89] found id: ""
	I1002 00:40:26.474058 1957595 logs.go:282] 2 containers: [8ff3cc680606db52e7aec936aa5a3da89b1f05ec9c72362a02657eefc93740a6 7d0046a7ae8533f8fe5ca98c95ba71a7292736d0effab8db2b682672e751ab5f]
	I1002 00:40:26.474118 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.478508 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.482063 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:40:26.482131 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:40:26.529778 1957595 cri.go:89] found id: "ad357c0209d969730c7b744d5c2fbad0a7c610d618ddddb9a2fc771ce023b605"
	I1002 00:40:26.529803 1957595 cri.go:89] found id: "9882de920e5952a14a2e090a9a310cf476deae60c93e9ecd7f4384e8d6012f04"
	I1002 00:40:26.529809 1957595 cri.go:89] found id: ""
	I1002 00:40:26.529817 1957595 logs.go:282] 2 containers: [ad357c0209d969730c7b744d5c2fbad0a7c610d618ddddb9a2fc771ce023b605 9882de920e5952a14a2e090a9a310cf476deae60c93e9ecd7f4384e8d6012f04]
	I1002 00:40:26.529877 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.533446 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.536980 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1002 00:40:26.537109 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:40:26.578079 1957595 cri.go:89] found id: "a767e421df83a00b32383377cc55e4151e7bd6f66d1066e2c265fd0d78172cac"
	I1002 00:40:26.578103 1957595 cri.go:89] found id: "1051ac83c993493af1544df235a7ec482b2588f13293c80a3af3112eff956a41"
	I1002 00:40:26.578108 1957595 cri.go:89] found id: ""
	I1002 00:40:26.578116 1957595 logs.go:282] 2 containers: [a767e421df83a00b32383377cc55e4151e7bd6f66d1066e2c265fd0d78172cac 1051ac83c993493af1544df235a7ec482b2588f13293c80a3af3112eff956a41]
	I1002 00:40:26.578172 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.581929 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.585716 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:40:26.585837 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:40:26.634661 1957595 cri.go:89] found id: "287dac3a47db438435ee5f4915263ed26e3f6eae852bb618365b6490d95b4453"
	I1002 00:40:26.634692 1957595 cri.go:89] found id: "4a7d63ad64058ab8a816f95771113a6646e7f49235241d0b9c99ee631cdf7f38"
	I1002 00:40:26.634697 1957595 cri.go:89] found id: ""
	I1002 00:40:26.634704 1957595 logs.go:282] 2 containers: [287dac3a47db438435ee5f4915263ed26e3f6eae852bb618365b6490d95b4453 4a7d63ad64058ab8a816f95771113a6646e7f49235241d0b9c99ee631cdf7f38]
	I1002 00:40:26.634769 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.638476 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.641979 1957595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1002 00:40:26.642082 1957595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1002 00:40:26.684232 1957595 cri.go:89] found id: "6d607cf4c97f9850c9226c3b461edd448574512ffe663325bf684f333c1bef50"
	I1002 00:40:26.684295 1957595 cri.go:89] found id: ""
	I1002 00:40:26.684318 1957595 logs.go:282] 1 containers: [6d607cf4c97f9850c9226c3b461edd448574512ffe663325bf684f333c1bef50]
	I1002 00:40:26.684407 1957595 ssh_runner.go:195] Run: which crictl
	I1002 00:40:26.688115 1957595 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:40:26.688142 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:40:26.840427 1957595 logs.go:123] Gathering logs for kube-apiserver [0a25b599263d779d6664295d455c78ad67aae1ea393696c88eb6887fedc9bf29] ...
	I1002 00:40:26.840617 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a25b599263d779d6664295d455c78ad67aae1ea393696c88eb6887fedc9bf29"
	I1002 00:40:26.907015 1957595 logs.go:123] Gathering logs for etcd [ab6cea850151c0b10687176375d47fb249199d273efd44a9c5d6a1d39e3f623a] ...
	I1002 00:40:26.907065 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab6cea850151c0b10687176375d47fb249199d273efd44a9c5d6a1d39e3f623a"
	I1002 00:40:26.952130 1957595 logs.go:123] Gathering logs for coredns [9f4ef323c339f7d0dd395ca641e6b83c6ee625242e1e405bcb230f0280f833f9] ...
	I1002 00:40:26.952161 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f4ef323c339f7d0dd395ca641e6b83c6ee625242e1e405bcb230f0280f833f9"
	I1002 00:40:26.993101 1957595 logs.go:123] Gathering logs for coredns [8518536c30aef09017c911aee769571a3ca250e88cc9853905c824c874626cac] ...
	I1002 00:40:26.993131 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8518536c30aef09017c911aee769571a3ca250e88cc9853905c824c874626cac"
	I1002 00:40:27.051452 1957595 logs.go:123] Gathering logs for kube-proxy [8ff3cc680606db52e7aec936aa5a3da89b1f05ec9c72362a02657eefc93740a6] ...
	I1002 00:40:27.051531 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ff3cc680606db52e7aec936aa5a3da89b1f05ec9c72362a02657eefc93740a6"
	I1002 00:40:27.101863 1957595 logs.go:123] Gathering logs for kubelet ...
	I1002 00:40:27.101891 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1002 00:40:27.165499 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.403474     657 reflector.go:138] object-"kube-system"/"kube-proxy-token-72v75": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-72v75" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.165767 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.403634     657 reflector.go:138] object-"kube-system"/"kindnet-token-drxz7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-drxz7" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.165977 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.403804     657 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.166190 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.403951     657 reflector.go:138] object-"kube-system"/"coredns-token-p2g4h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-p2g4h" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.166401 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.406263     657 reflector.go:138] object-"default"/"default-token-lmpgm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-lmpgm" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.166604 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.406638     657 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.166826 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.406794     657 reflector.go:138] object-"kube-system"/"metrics-server-token-x2fml": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-x2fml" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.167052 1957595 logs.go:138] Found kubelet problem: Oct 02 00:34:58 old-k8s-version-920941 kubelet[657]: E1002 00:34:58.406955     657 reflector.go:138] object-"kube-system"/"storage-provisioner-token-2v6pp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-2v6pp" is forbidden: User "system:node:old-k8s-version-920941" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-920941' and this object
	W1002 00:40:27.175955 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:02 old-k8s-version-920941 kubelet[657]: E1002 00:35:02.816096     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1002 00:40:27.176152 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:03 old-k8s-version-920941 kubelet[657]: E1002 00:35:03.404518     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.179250 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:17 old-k8s-version-920941 kubelet[657]: E1002 00:35:17.054839     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1002 00:40:27.180905 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:24 old-k8s-version-920941 kubelet[657]: E1002 00:35:24.463670     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.181701 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:25 old-k8s-version-920941 kubelet[657]: E1002 00:35:25.467856     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.181887 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:28 old-k8s-version-920941 kubelet[657]: E1002 00:35:28.036655     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.182213 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:32 old-k8s-version-920941 kubelet[657]: E1002 00:35:32.484516     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.182651 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:32 old-k8s-version-920941 kubelet[657]: E1002 00:35:32.507888     657 pod_workers.go:191] Error syncing pod 4cf65c92-656f-422a-952b-05891b36cb68 ("storage-provisioner_kube-system(4cf65c92-656f-422a-952b-05891b36cb68)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4cf65c92-656f-422a-952b-05891b36cb68)"
	W1002 00:40:27.185453 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:39 old-k8s-version-920941 kubelet[657]: E1002 00:35:39.045417     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1002 00:40:27.186187 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:46 old-k8s-version-920941 kubelet[657]: E1002 00:35:46.553569     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.186376 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:50 old-k8s-version-920941 kubelet[657]: E1002 00:35:50.037065     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.186705 1957595 logs.go:138] Found kubelet problem: Oct 02 00:35:52 old-k8s-version-920941 kubelet[657]: E1002 00:35:52.484920     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.186890 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:01 old-k8s-version-920941 kubelet[657]: E1002 00:36:01.036920     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.187229 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:05 old-k8s-version-920941 kubelet[657]: E1002 00:36:05.038238     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.187413 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:14 old-k8s-version-920941 kubelet[657]: E1002 00:36:14.037429     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.187999 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:17 old-k8s-version-920941 kubelet[657]: E1002 00:36:17.639596     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.188326 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:22 old-k8s-version-920941 kubelet[657]: E1002 00:36:22.485499     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.190840 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:25 old-k8s-version-920941 kubelet[657]: E1002 00:36:25.067982     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1002 00:40:27.191173 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:36 old-k8s-version-920941 kubelet[657]: E1002 00:36:36.036206     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.191359 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:37 old-k8s-version-920941 kubelet[657]: E1002 00:36:37.038810     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.191686 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:49 old-k8s-version-920941 kubelet[657]: E1002 00:36:49.045805     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.191870 1957595 logs.go:138] Found kubelet problem: Oct 02 00:36:50 old-k8s-version-920941 kubelet[657]: E1002 00:36:50.037091     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.192053 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:03 old-k8s-version-920941 kubelet[657]: E1002 00:37:03.040054     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.192644 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:04 old-k8s-version-920941 kubelet[657]: E1002 00:37:04.765096     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.192971 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:12 old-k8s-version-920941 kubelet[657]: E1002 00:37:12.484569     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.193163 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:17 old-k8s-version-920941 kubelet[657]: E1002 00:37:17.036843     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.193488 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:27 old-k8s-version-920941 kubelet[657]: E1002 00:37:27.037002     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.193671 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:28 old-k8s-version-920941 kubelet[657]: E1002 00:37:28.036803     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.193857 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:40 old-k8s-version-920941 kubelet[657]: E1002 00:37:40.036708     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.194183 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:42 old-k8s-version-920941 kubelet[657]: E1002 00:37:42.036260     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.194508 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:54 old-k8s-version-920941 kubelet[657]: E1002 00:37:54.036836     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.196937 1957595 logs.go:138] Found kubelet problem: Oct 02 00:37:54 old-k8s-version-920941 kubelet[657]: E1002 00:37:54.044847     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1002 00:40:27.197264 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:05 old-k8s-version-920941 kubelet[657]: E1002 00:38:05.036582     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.197450 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:07 old-k8s-version-920941 kubelet[657]: E1002 00:38:07.040872     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.197779 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:17 old-k8s-version-920941 kubelet[657]: E1002 00:38:17.036860     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.197964 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:21 old-k8s-version-920941 kubelet[657]: E1002 00:38:21.036821     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.198554 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:29 old-k8s-version-920941 kubelet[657]: E1002 00:38:29.977771     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.198882 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:32 old-k8s-version-920941 kubelet[657]: E1002 00:38:32.484422     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.199066 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:36 old-k8s-version-920941 kubelet[657]: E1002 00:38:36.036591     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.199394 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:46 old-k8s-version-920941 kubelet[657]: E1002 00:38:46.036189     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.199582 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:50 old-k8s-version-920941 kubelet[657]: E1002 00:38:50.036961     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.199907 1957595 logs.go:138] Found kubelet problem: Oct 02 00:38:59 old-k8s-version-920941 kubelet[657]: E1002 00:38:59.040242     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.200091 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:04 old-k8s-version-920941 kubelet[657]: E1002 00:39:04.036603     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.200416 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:11 old-k8s-version-920941 kubelet[657]: E1002 00:39:11.036751     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.200607 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:17 old-k8s-version-920941 kubelet[657]: E1002 00:39:17.036726     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.200935 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:26 old-k8s-version-920941 kubelet[657]: E1002 00:39:26.036140     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.201122 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:28 old-k8s-version-920941 kubelet[657]: E1002 00:39:28.036705     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.201447 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:37 old-k8s-version-920941 kubelet[657]: E1002 00:39:37.039880     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.201635 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:42 old-k8s-version-920941 kubelet[657]: E1002 00:39:42.037320     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.201962 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:48 old-k8s-version-920941 kubelet[657]: E1002 00:39:48.036281     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.202147 1957595 logs.go:138] Found kubelet problem: Oct 02 00:39:55 old-k8s-version-920941 kubelet[657]: E1002 00:39:55.037598     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.202508 1957595 logs.go:138] Found kubelet problem: Oct 02 00:40:02 old-k8s-version-920941 kubelet[657]: E1002 00:40:02.036402     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.202694 1957595 logs.go:138] Found kubelet problem: Oct 02 00:40:08 old-k8s-version-920941 kubelet[657]: E1002 00:40:08.036742     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.203020 1957595 logs.go:138] Found kubelet problem: Oct 02 00:40:15 old-k8s-version-920941 kubelet[657]: E1002 00:40:15.037929     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.203205 1957595 logs.go:138] Found kubelet problem: Oct 02 00:40:20 old-k8s-version-920941 kubelet[657]: E1002 00:40:20.036677     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.203530 1957595 logs.go:138] Found kubelet problem: Oct 02 00:40:27 old-k8s-version-920941 kubelet[657]: E1002 00:40:27.036953     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	I1002 00:40:27.203544 1957595 logs.go:123] Gathering logs for kube-scheduler [19bd9ef04d15a6dccf5970eea7159536699fa21c02ae6b578cb94fb5a6c7af6a] ...
	I1002 00:40:27.203559 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19bd9ef04d15a6dccf5970eea7159536699fa21c02ae6b578cb94fb5a6c7af6a"
	I1002 00:40:27.270245 1957595 logs.go:123] Gathering logs for kindnet [a767e421df83a00b32383377cc55e4151e7bd6f66d1066e2c265fd0d78172cac] ...
	I1002 00:40:27.270278 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a767e421df83a00b32383377cc55e4151e7bd6f66d1066e2c265fd0d78172cac"
	I1002 00:40:27.330354 1957595 logs.go:123] Gathering logs for kubernetes-dashboard [6d607cf4c97f9850c9226c3b461edd448574512ffe663325bf684f333c1bef50] ...
	I1002 00:40:27.330384 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d607cf4c97f9850c9226c3b461edd448574512ffe663325bf684f333c1bef50"
	I1002 00:40:27.377052 1957595 logs.go:123] Gathering logs for dmesg ...
	I1002 00:40:27.377080 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:40:27.396038 1957595 logs.go:123] Gathering logs for kube-scheduler [87d3a4aa9e474d9d563b52794771a880f456688ecc28b5c9391254d7e388d601] ...
	I1002 00:40:27.396069 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87d3a4aa9e474d9d563b52794771a880f456688ecc28b5c9391254d7e388d601"
	I1002 00:40:27.434135 1957595 logs.go:123] Gathering logs for kube-controller-manager [ad357c0209d969730c7b744d5c2fbad0a7c610d618ddddb9a2fc771ce023b605] ...
	I1002 00:40:27.434161 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad357c0209d969730c7b744d5c2fbad0a7c610d618ddddb9a2fc771ce023b605"
	I1002 00:40:27.489634 1957595 logs.go:123] Gathering logs for kube-controller-manager [9882de920e5952a14a2e090a9a310cf476deae60c93e9ecd7f4384e8d6012f04] ...
	I1002 00:40:27.489667 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9882de920e5952a14a2e090a9a310cf476deae60c93e9ecd7f4384e8d6012f04"
	I1002 00:40:27.551020 1957595 logs.go:123] Gathering logs for containerd ...
	I1002 00:40:27.551058 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1002 00:40:27.615278 1957595 logs.go:123] Gathering logs for kube-apiserver [11ec001392bd7c1b6e1804233a847099114357802451cb3dfc4f819481c9eac5] ...
	I1002 00:40:27.615315 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11ec001392bd7c1b6e1804233a847099114357802451cb3dfc4f819481c9eac5"
	I1002 00:40:27.699876 1957595 logs.go:123] Gathering logs for etcd [8c7992aa581c091fd0ff51c777b72d5212ca433204a6a429359b1bf731b055f4] ...
	I1002 00:40:27.699910 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c7992aa581c091fd0ff51c777b72d5212ca433204a6a429359b1bf731b055f4"
	I1002 00:40:27.745907 1957595 logs.go:123] Gathering logs for kube-proxy [7d0046a7ae8533f8fe5ca98c95ba71a7292736d0effab8db2b682672e751ab5f] ...
	I1002 00:40:27.745938 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d0046a7ae8533f8fe5ca98c95ba71a7292736d0effab8db2b682672e751ab5f"
	I1002 00:40:27.790931 1957595 logs.go:123] Gathering logs for kindnet [1051ac83c993493af1544df235a7ec482b2588f13293c80a3af3112eff956a41] ...
	I1002 00:40:27.790957 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1051ac83c993493af1544df235a7ec482b2588f13293c80a3af3112eff956a41"
	I1002 00:40:27.841399 1957595 logs.go:123] Gathering logs for storage-provisioner [287dac3a47db438435ee5f4915263ed26e3f6eae852bb618365b6490d95b4453] ...
	I1002 00:40:27.841438 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 287dac3a47db438435ee5f4915263ed26e3f6eae852bb618365b6490d95b4453"
	I1002 00:40:27.882342 1957595 logs.go:123] Gathering logs for storage-provisioner [4a7d63ad64058ab8a816f95771113a6646e7f49235241d0b9c99ee631cdf7f38] ...
	I1002 00:40:27.882368 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a7d63ad64058ab8a816f95771113a6646e7f49235241d0b9c99ee631cdf7f38"
	I1002 00:40:27.925417 1957595 logs.go:123] Gathering logs for container status ...
	I1002 00:40:27.925443 1957595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:40:27.970197 1957595 out.go:358] Setting ErrFile to fd 2...
	I1002 00:40:27.970228 1957595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1002 00:40:27.970391 1957595 out.go:270] X Problems detected in kubelet:
	W1002 00:40:27.970409 1957595 out.go:270]   Oct 02 00:40:02 old-k8s-version-920941 kubelet[657]: E1002 00:40:02.036402     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.970430 1957595 out.go:270]   Oct 02 00:40:08 old-k8s-version-920941 kubelet[657]: E1002 00:40:08.036742     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.970447 1957595 out.go:270]   Oct 02 00:40:15 old-k8s-version-920941 kubelet[657]: E1002 00:40:15.037929     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	W1002 00:40:27.970454 1957595 out.go:270]   Oct 02 00:40:20 old-k8s-version-920941 kubelet[657]: E1002 00:40:20.036677     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1002 00:40:27.970464 1957595 out.go:270]   Oct 02 00:40:27 old-k8s-version-920941 kubelet[657]: E1002 00:40:27.036953     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	I1002 00:40:27.970475 1957595 out.go:358] Setting ErrFile to fd 2...
	I1002 00:40:27.970482 1957595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:40:28.353221 1964993 pod_ready.go:82] duration metric: took 4m0.006366195s for pod "metrics-server-6867b74b74-npt4v" in "kube-system" namespace to be "Ready" ...
	E1002 00:40:28.353248 1964993 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:40:28.353269 1964993 pod_ready.go:39] duration metric: took 4m12.794967836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:40:28.353285 1964993 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:40:28.353312 1964993 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:40:28.353380 1964993 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:40:28.390123 1964993 cri.go:89] found id: "b92c04d976cf457def1c3e1c5c1771c05abc92414cf9f58f2163d2062f92d1f0"
	I1002 00:40:28.390144 1964993 cri.go:89] found id: "66577b32134293bb35dd337993a9291b47071541572d1e97c391dbf01350de33"
	I1002 00:40:28.390148 1964993 cri.go:89] found id: ""
	I1002 00:40:28.390155 1964993 logs.go:282] 2 containers: [b92c04d976cf457def1c3e1c5c1771c05abc92414cf9f58f2163d2062f92d1f0 66577b32134293bb35dd337993a9291b47071541572d1e97c391dbf01350de33]
	I1002 00:40:28.390214 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.393760 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.397182 1964993 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1002 00:40:28.397304 1964993 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:40:28.435037 1964993 cri.go:89] found id: "2ccc95e83c5312e11677c1375a5c0f1586dc075112b8f728428c81f16a9c405f"
	I1002 00:40:28.435059 1964993 cri.go:89] found id: "6d268abb1217cac0a576b1f76c6914b8c8d323cf46b9da078c742aeac3f2fb83"
	I1002 00:40:28.435065 1964993 cri.go:89] found id: ""
	I1002 00:40:28.435072 1964993 logs.go:282] 2 containers: [2ccc95e83c5312e11677c1375a5c0f1586dc075112b8f728428c81f16a9c405f 6d268abb1217cac0a576b1f76c6914b8c8d323cf46b9da078c742aeac3f2fb83]
	I1002 00:40:28.435131 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.438951 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.442757 1964993 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1002 00:40:28.442833 1964993 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:40:28.480604 1964993 cri.go:89] found id: "67fc404ed205955a07aee2a954b09265bf5bbf16acfe7a61fec21fce8877e424"
	I1002 00:40:28.480679 1964993 cri.go:89] found id: "4135867ed4acb2d1c331b95489e27f4be2485ad15403545148d1824e5f1c4e8f"
	I1002 00:40:28.480697 1964993 cri.go:89] found id: ""
	I1002 00:40:28.480718 1964993 logs.go:282] 2 containers: [67fc404ed205955a07aee2a954b09265bf5bbf16acfe7a61fec21fce8877e424 4135867ed4acb2d1c331b95489e27f4be2485ad15403545148d1824e5f1c4e8f]
	I1002 00:40:28.480810 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.484617 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.487775 1964993 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:40:28.487877 1964993 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:40:28.529568 1964993 cri.go:89] found id: "85cff847d41fa4de144e1521354a74dcc9165e0016ff4f7621a11d88126994b1"
	I1002 00:40:28.529644 1964993 cri.go:89] found id: "7cb4fe5d533b8ec48fe48950e554c6aba769ee754aa13719bdb5078978fbfeb6"
	I1002 00:40:28.529661 1964993 cri.go:89] found id: ""
	I1002 00:40:28.529689 1964993 logs.go:282] 2 containers: [85cff847d41fa4de144e1521354a74dcc9165e0016ff4f7621a11d88126994b1 7cb4fe5d533b8ec48fe48950e554c6aba769ee754aa13719bdb5078978fbfeb6]
	I1002 00:40:28.529774 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.533653 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.537059 1964993 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:40:28.537176 1964993 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:40:28.576701 1964993 cri.go:89] found id: "bbfbf8b0371e2018a3aa70e3ccfd503c7b6af822245d5634a539533d6e973311"
	I1002 00:40:28.576730 1964993 cri.go:89] found id: "a01e2f4ff5d8ed78b9df7324f5550f47a624ae296408c70204ff351a1b2e4ad0"
	I1002 00:40:28.576735 1964993 cri.go:89] found id: ""
	I1002 00:40:28.576741 1964993 logs.go:282] 2 containers: [bbfbf8b0371e2018a3aa70e3ccfd503c7b6af822245d5634a539533d6e973311 a01e2f4ff5d8ed78b9df7324f5550f47a624ae296408c70204ff351a1b2e4ad0]
	I1002 00:40:28.576809 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.580712 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.584179 1964993 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:40:28.584247 1964993 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:40:28.620945 1964993 cri.go:89] found id: "17d9d1f1793828e7e751349d6de1e04094c67aff9a2d5ed6c6870371c9a84869"
	I1002 00:40:28.620969 1964993 cri.go:89] found id: "200cc54dc91df970755d5924d60cfcdfeea81697ac0a49ddae44e074ffab6870"
	I1002 00:40:28.620975 1964993 cri.go:89] found id: ""
	I1002 00:40:28.620982 1964993 logs.go:282] 2 containers: [17d9d1f1793828e7e751349d6de1e04094c67aff9a2d5ed6c6870371c9a84869 200cc54dc91df970755d5924d60cfcdfeea81697ac0a49ddae44e074ffab6870]
	I1002 00:40:28.621039 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.624993 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.628596 1964993 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1002 00:40:28.628664 1964993 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:40:28.668805 1964993 cri.go:89] found id: "31e1608032a84d4a051fe13e028dde8059dd432c004b00ccba658661d794cb83"
	I1002 00:40:28.668827 1964993 cri.go:89] found id: "1d500cb37211f32dd14aa83f29576fc4093384b3d5410f2ae3d57c4de3b64e1d"
	I1002 00:40:28.668833 1964993 cri.go:89] found id: ""
	I1002 00:40:28.668840 1964993 logs.go:282] 2 containers: [31e1608032a84d4a051fe13e028dde8059dd432c004b00ccba658661d794cb83 1d500cb37211f32dd14aa83f29576fc4093384b3d5410f2ae3d57c4de3b64e1d]
	I1002 00:40:28.668901 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.672431 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.675893 1964993 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1002 00:40:28.675960 1964993 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1002 00:40:28.725324 1964993 cri.go:89] found id: "f550f26320a85294a023e987d647be9aecf80bf2f5a96c5b6b333720dc42baf7"
	I1002 00:40:28.725345 1964993 cri.go:89] found id: ""
	I1002 00:40:28.725352 1964993 logs.go:282] 1 containers: [f550f26320a85294a023e987d647be9aecf80bf2f5a96c5b6b333720dc42baf7]
	I1002 00:40:28.725440 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.729159 1964993 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:40:28.729290 1964993 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:40:28.768966 1964993 cri.go:89] found id: "bd0ebafa0062b294e1dd7417d92f303381894417f4ed4fa5896092e479ee613e"
	I1002 00:40:28.768990 1964993 cri.go:89] found id: "35f838f0352a89eba32cb3c56ccbe98ed6425f0dc4716e448f82a2c770ae6d09"
	I1002 00:40:28.768995 1964993 cri.go:89] found id: ""
	I1002 00:40:28.769002 1964993 logs.go:282] 2 containers: [bd0ebafa0062b294e1dd7417d92f303381894417f4ed4fa5896092e479ee613e 35f838f0352a89eba32cb3c56ccbe98ed6425f0dc4716e448f82a2c770ae6d09]
	I1002 00:40:28.769081 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.772779 1964993 ssh_runner.go:195] Run: which crictl
	I1002 00:40:28.776082 1964993 logs.go:123] Gathering logs for dmesg ...
	I1002 00:40:28.776104 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:40:28.793105 1964993 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:40:28.793133 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:40:28.960868 1964993 logs.go:123] Gathering logs for coredns [67fc404ed205955a07aee2a954b09265bf5bbf16acfe7a61fec21fce8877e424] ...
	I1002 00:40:28.960903 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67fc404ed205955a07aee2a954b09265bf5bbf16acfe7a61fec21fce8877e424"
	I1002 00:40:29.002042 1964993 logs.go:123] Gathering logs for coredns [4135867ed4acb2d1c331b95489e27f4be2485ad15403545148d1824e5f1c4e8f] ...
	I1002 00:40:29.002129 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4135867ed4acb2d1c331b95489e27f4be2485ad15403545148d1824e5f1c4e8f"
	I1002 00:40:29.046160 1964993 logs.go:123] Gathering logs for kube-scheduler [7cb4fe5d533b8ec48fe48950e554c6aba769ee754aa13719bdb5078978fbfeb6] ...
	I1002 00:40:29.046187 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cb4fe5d533b8ec48fe48950e554c6aba769ee754aa13719bdb5078978fbfeb6"
	I1002 00:40:29.103707 1964993 logs.go:123] Gathering logs for kube-proxy [bbfbf8b0371e2018a3aa70e3ccfd503c7b6af822245d5634a539533d6e973311] ...
	I1002 00:40:29.103741 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbfbf8b0371e2018a3aa70e3ccfd503c7b6af822245d5634a539533d6e973311"
	I1002 00:40:29.149192 1964993 logs.go:123] Gathering logs for kube-controller-manager [17d9d1f1793828e7e751349d6de1e04094c67aff9a2d5ed6c6870371c9a84869] ...
	I1002 00:40:29.149220 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17d9d1f1793828e7e751349d6de1e04094c67aff9a2d5ed6c6870371c9a84869"
	I1002 00:40:29.220431 1964993 logs.go:123] Gathering logs for kubelet ...
	I1002 00:40:29.220497 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1002 00:40:29.261752 1964993 logs.go:138] Found kubelet problem: Oct 02 00:36:15 no-preload-643266 kubelet[657]: W1002 00:36:15.762374     657 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-643266" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-643266' and this object
	W1002 00:40:29.262023 1964993 logs.go:138] Found kubelet problem: Oct 02 00:36:15 no-preload-643266 kubelet[657]: E1002 00:36:15.762558     657 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-643266\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-643266' and this object" logger="UnhandledError"
	W1002 00:40:29.262204 1964993 logs.go:138] Found kubelet problem: Oct 02 00:36:15 no-preload-643266 kubelet[657]: W1002 00:36:15.762706     657 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-643266" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-643266' and this object
	W1002 00:40:29.262426 1964993 logs.go:138] Found kubelet problem: Oct 02 00:36:15 no-preload-643266 kubelet[657]: E1002 00:36:15.767953     657 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:no-preload-643266\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-643266' and this object" logger="UnhandledError"
	W1002 00:40:29.262612 1964993 logs.go:138] Found kubelet problem: Oct 02 00:36:15 no-preload-643266 kubelet[657]: W1002 00:36:15.769561     657 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-643266" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'no-preload-643266' and this object
	W1002 00:40:29.262842 1964993 logs.go:138] Found kubelet problem: Oct 02 00:36:15 no-preload-643266 kubelet[657]: E1002 00:36:15.769722     657 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-643266\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-643266' and this object" logger="UnhandledError"
	W1002 00:40:29.263024 1964993 logs.go:138] Found kubelet problem: Oct 02 00:36:15 no-preload-643266 kubelet[657]: W1002 00:36:15.771013     657 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-643266" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-643266' and this object
	W1002 00:40:29.263249 1964993 logs.go:138] Found kubelet problem: Oct 02 00:36:15 no-preload-643266 kubelet[657]: E1002 00:36:15.775699     657 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-643266\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-643266' and this object" logger="UnhandledError"
	W1002 00:40:29.270028 1964993 logs.go:138] Found kubelet problem: Oct 02 00:36:19 no-preload-643266 kubelet[657]: W1002 00:36:19.540537     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-643266" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-643266' and this object
	W1002 00:40:29.270266 1964993 logs.go:138] Found kubelet problem: Oct 02 00:36:19 no-preload-643266 kubelet[657]: E1002 00:36:19.541010     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-643266\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-643266' and this object" logger="UnhandledError"
	I1002 00:40:29.302777 1964993 logs.go:123] Gathering logs for storage-provisioner [bd0ebafa0062b294e1dd7417d92f303381894417f4ed4fa5896092e479ee613e] ...
	I1002 00:40:29.302803 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd0ebafa0062b294e1dd7417d92f303381894417f4ed4fa5896092e479ee613e"
	I1002 00:40:29.340391 1964993 logs.go:123] Gathering logs for containerd ...
	I1002 00:40:29.340417 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1002 00:40:29.405206 1964993 logs.go:123] Gathering logs for kubernetes-dashboard [f550f26320a85294a023e987d647be9aecf80bf2f5a96c5b6b333720dc42baf7] ...
	I1002 00:40:29.405242 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f550f26320a85294a023e987d647be9aecf80bf2f5a96c5b6b333720dc42baf7"
	I1002 00:40:29.448084 1964993 logs.go:123] Gathering logs for kube-controller-manager [200cc54dc91df970755d5924d60cfcdfeea81697ac0a49ddae44e074ffab6870] ...
	I1002 00:40:29.448112 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 200cc54dc91df970755d5924d60cfcdfeea81697ac0a49ddae44e074ffab6870"
	I1002 00:40:29.506527 1964993 logs.go:123] Gathering logs for storage-provisioner [35f838f0352a89eba32cb3c56ccbe98ed6425f0dc4716e448f82a2c770ae6d09] ...
	I1002 00:40:29.506557 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f838f0352a89eba32cb3c56ccbe98ed6425f0dc4716e448f82a2c770ae6d09"
	I1002 00:40:29.546099 1964993 logs.go:123] Gathering logs for container status ...
	I1002 00:40:29.546128 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:40:29.588244 1964993 logs.go:123] Gathering logs for kube-proxy [a01e2f4ff5d8ed78b9df7324f5550f47a624ae296408c70204ff351a1b2e4ad0] ...
	I1002 00:40:29.588274 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a01e2f4ff5d8ed78b9df7324f5550f47a624ae296408c70204ff351a1b2e4ad0"
	I1002 00:40:29.625627 1964993 logs.go:123] Gathering logs for etcd [6d268abb1217cac0a576b1f76c6914b8c8d323cf46b9da078c742aeac3f2fb83] ...
	I1002 00:40:29.625657 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d268abb1217cac0a576b1f76c6914b8c8d323cf46b9da078c742aeac3f2fb83"
	I1002 00:40:29.684615 1964993 logs.go:123] Gathering logs for kindnet [31e1608032a84d4a051fe13e028dde8059dd432c004b00ccba658661d794cb83] ...
	I1002 00:40:29.684690 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31e1608032a84d4a051fe13e028dde8059dd432c004b00ccba658661d794cb83"
	I1002 00:40:29.736303 1964993 logs.go:123] Gathering logs for kindnet [1d500cb37211f32dd14aa83f29576fc4093384b3d5410f2ae3d57c4de3b64e1d] ...
	I1002 00:40:29.736337 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d500cb37211f32dd14aa83f29576fc4093384b3d5410f2ae3d57c4de3b64e1d"
	I1002 00:40:29.776598 1964993 logs.go:123] Gathering logs for etcd [2ccc95e83c5312e11677c1375a5c0f1586dc075112b8f728428c81f16a9c405f] ...
	I1002 00:40:29.776626 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ccc95e83c5312e11677c1375a5c0f1586dc075112b8f728428c81f16a9c405f"
	I1002 00:40:29.831660 1964993 logs.go:123] Gathering logs for kube-apiserver [66577b32134293bb35dd337993a9291b47071541572d1e97c391dbf01350de33] ...
	I1002 00:40:29.831692 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66577b32134293bb35dd337993a9291b47071541572d1e97c391dbf01350de33"
	I1002 00:40:29.892484 1964993 logs.go:123] Gathering logs for kube-scheduler [85cff847d41fa4de144e1521354a74dcc9165e0016ff4f7621a11d88126994b1] ...
	I1002 00:40:29.892515 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85cff847d41fa4de144e1521354a74dcc9165e0016ff4f7621a11d88126994b1"
	I1002 00:40:29.948679 1964993 logs.go:123] Gathering logs for kube-apiserver [b92c04d976cf457def1c3e1c5c1771c05abc92414cf9f58f2163d2062f92d1f0] ...
	I1002 00:40:29.948711 1964993 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b92c04d976cf457def1c3e1c5c1771c05abc92414cf9f58f2163d2062f92d1f0"
	I1002 00:40:30.005895 1964993 out.go:358] Setting ErrFile to fd 2...
	I1002 00:40:30.005936 1964993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1002 00:40:30.006014 1964993 out.go:270] X Problems detected in kubelet:
	W1002 00:40:30.006039 1964993 out.go:270]   Oct 02 00:36:15 no-preload-643266 kubelet[657]: E1002 00:36:15.769722     657 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-643266\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-643266' and this object" logger="UnhandledError"
	W1002 00:40:30.006050 1964993 out.go:270]   Oct 02 00:36:15 no-preload-643266 kubelet[657]: W1002 00:36:15.771013     657 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-643266" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-643266' and this object
	W1002 00:40:30.006211 1964993 out.go:270]   Oct 02 00:36:15 no-preload-643266 kubelet[657]: E1002 00:36:15.775699     657 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-643266\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-643266' and this object" logger="UnhandledError"
	W1002 00:40:30.006219 1964993 out.go:270]   Oct 02 00:36:19 no-preload-643266 kubelet[657]: W1002 00:36:19.540537     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-643266" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-643266' and this object
	W1002 00:40:30.006229 1964993 out.go:270]   Oct 02 00:36:19 no-preload-643266 kubelet[657]: E1002 00:36:19.541010     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-643266\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-643266' and this object" logger="UnhandledError"
	I1002 00:40:30.006236 1964993 out.go:358] Setting ErrFile to fd 2...
	I1002 00:40:30.006250 1964993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:40:37.971296 1957595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:40:37.984057 1957595 api_server.go:72] duration metric: took 6m0.754066339s to wait for apiserver process to appear ...
	I1002 00:40:37.984081 1957595 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:40:37.986391 1957595 out.go:201] 
	W1002 00:40:37.988342 1957595 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W1002 00:40:37.988362 1957595 out.go:270] * 
	W1002 00:40:37.990636 1957595 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 00:40:37.993660 1957595 out.go:201] 
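	
	The exit above is GUEST_START: the apiserver healthz endpoint never reported healthy within the 6m0s wait. A minimal sketch of probing that same endpoint by hand, assuming the profile name shown in these logs, the default 8443 apiserver port, and that curl is available in the node image:
	
	  # Hedged sketch (not part of the test run): probe apiserver health from inside the node.
	  minikube -p old-k8s-version-920941 ssh -- sudo curl -sk https://localhost:8443/healthz
	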
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	e948e7eda020f       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   da3dc8774e99b       dashboard-metrics-scraper-8d5bb5db8-hlf6t
	287dac3a47db4       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   c3e6dff0be9a3       storage-provisioner
	6d607cf4c97f9       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   0dc01871d3f89       kubernetes-dashboard-cd95d586-g94cm
	a767e421df83a       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   d2288ac512f9c       kindnet-v2lm7
	56e926b48dece       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   86fc316cc65bc       busybox
	8ff3cc680606d       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   1b904193b1f7a       kube-proxy-42b7q
	4a7d63ad64058       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   c3e6dff0be9a3       storage-provisioner
	9f4ef323c339f       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   8f7c98e38cda8       coredns-74ff55c5b-nbdkx
	ad357c0209d96       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   e70333d2656ff       kube-controller-manager-old-k8s-version-920941
	0a25b599263d7       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   3ff57087a5249       kube-apiserver-old-k8s-version-920941
	87d3a4aa9e474       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   5a266de224a20       kube-scheduler-old-k8s-version-920941
	ab6cea850151c       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   c2c373dafef48       etcd-old-k8s-version-920941
	2850055cdbd3e       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   6c29055419581       busybox
	8518536c30aef       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   c21aeb3cb74e8       coredns-74ff55c5b-nbdkx
	1051ac83c9934       6a23fa8fd2b78       7 minutes ago       Exited              kindnet-cni                 0                   822f533cc9af0       kindnet-v2lm7
	7d0046a7ae853       25a5233254979       7 minutes ago       Exited              kube-proxy                  0                   02179b6c9ddbe       kube-proxy-42b7q
	9882de920e595       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   b33a4a46eabef       kube-controller-manager-old-k8s-version-920941
	19bd9ef04d15a       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   bca0adeadd888       kube-scheduler-old-k8s-version-920941
	11ec001392bd7       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   ff78d6443f77e       kube-apiserver-old-k8s-version-920941
	8c7992aa581c0       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   a0bc77fcf5647       etcd-old-k8s-version-920941
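	
	The table above lists dashboard-metrics-scraper (e948e7eda020f) as Exited on attempt 5. A minimal sketch of reading that exited container's logs directly through crictl, assuming the same profile name; the container ID is the one shown in the table:
	
	  # Hedged sketch: fetch logs for the exited container by the ID from the table above.
	  minikube -p old-k8s-version-920941 ssh -- sudo crictl logs e948e7eda020f
	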
	
	
	==> containerd <==
	Oct 02 00:36:25 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:36:25.064537504Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 02 00:36:25 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:36:25.066446959Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 02 00:36:25 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:36:25.066540077Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 02 00:37:04 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:04.038999195Z" level=info msg="CreateContainer within sandbox \"da3dc8774e99b1dbdec62a899bbb1f81321683238c53cfb68f257e35c1450e6b\" for container name:\"dashboard-metrics-scraper\"  attempt:4"
	Oct 02 00:37:04 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:04.056549792Z" level=info msg="CreateContainer within sandbox \"da3dc8774e99b1dbdec62a899bbb1f81321683238c53cfb68f257e35c1450e6b\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"36fd8b5c62d3346cf48339fda72d27447be562ec33feb1a8f839b64bfe251fda\""
	Oct 02 00:37:04 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:04.057114539Z" level=info msg="StartContainer for \"36fd8b5c62d3346cf48339fda72d27447be562ec33feb1a8f839b64bfe251fda\""
	Oct 02 00:37:04 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:04.122311861Z" level=info msg="StartContainer for \"36fd8b5c62d3346cf48339fda72d27447be562ec33feb1a8f839b64bfe251fda\" returns successfully"
	Oct 02 00:37:04 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:04.147342188Z" level=info msg="shim disconnected" id=36fd8b5c62d3346cf48339fda72d27447be562ec33feb1a8f839b64bfe251fda namespace=k8s.io
	Oct 02 00:37:04 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:04.147401797Z" level=warning msg="cleaning up after shim disconnected" id=36fd8b5c62d3346cf48339fda72d27447be562ec33feb1a8f839b64bfe251fda namespace=k8s.io
	Oct 02 00:37:04 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:04.147413341Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 02 00:37:04 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:04.761958864Z" level=info msg="RemoveContainer for \"2b836a698cc796cedef8140bfcb36cf13b31caeba83f8b7595356d31a397f503\""
	Oct 02 00:37:04 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:04.768970714Z" level=info msg="RemoveContainer for \"2b836a698cc796cedef8140bfcb36cf13b31caeba83f8b7595356d31a397f503\" returns successfully"
	Oct 02 00:37:54 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:54.037266965Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 00:37:54 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:54.042595365Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 02 00:37:54 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:54.044193943Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 02 00:37:54 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:37:54.044219378Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 02 00:38:29 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:38:29.039015616Z" level=info msg="CreateContainer within sandbox \"da3dc8774e99b1dbdec62a899bbb1f81321683238c53cfb68f257e35c1450e6b\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Oct 02 00:38:29 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:38:29.055309577Z" level=info msg="CreateContainer within sandbox \"da3dc8774e99b1dbdec62a899bbb1f81321683238c53cfb68f257e35c1450e6b\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0\""
	Oct 02 00:38:29 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:38:29.056020421Z" level=info msg="StartContainer for \"e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0\""
	Oct 02 00:38:29 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:38:29.126820582Z" level=info msg="StartContainer for \"e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0\" returns successfully"
	Oct 02 00:38:29 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:38:29.152865541Z" level=info msg="shim disconnected" id=e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0 namespace=k8s.io
	Oct 02 00:38:29 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:38:29.152929876Z" level=warning msg="cleaning up after shim disconnected" id=e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0 namespace=k8s.io
	Oct 02 00:38:29 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:38:29.152940813Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 02 00:38:29 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:38:29.986281404Z" level=info msg="RemoveContainer for \"36fd8b5c62d3346cf48339fda72d27447be562ec33feb1a8f839b64bfe251fda\""
	Oct 02 00:38:29 old-k8s-version-920941 containerd[569]: time="2024-10-02T00:38:29.996216833Z" level=info msg="RemoveContainer for \"36fd8b5c62d3346cf48339fda72d27447be562ec33feb1a8f839b64bfe251fda\" returns successfully"
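	
	Every pull of fake.domain/registry.k8s.io/echoserver:1.4 above fails with "lookup fake.domain ... no such host", which is what keeps metrics-server in ImagePullBackOff. A minimal sketch of reproducing the same resolution failure by hand, assuming the profile name from these logs:
	
	  # Hedged sketch: the pull should fail with the same DNS "no such host" error.
	  minikube -p old-k8s-version-920941 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	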
	
	
	==> coredns [8518536c30aef09017c911aee769571a3ca250e88cc9853905c824c874626cac] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:43479 - 1498 "HINFO IN 2804956054141908690.4456827025126775008. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03128646s
	
	
	==> coredns [9f4ef323c339f7d0dd395ca641e6b83c6ee625242e1e405bcb230f0280f833f9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:55831 - 62672 "HINFO IN 8662144659542547758.8324823098691210009. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020795834s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I1002 00:35:31.705040       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-02 00:35:01.704563485 +0000 UTC m=+0.173755769) (total time: 30.000372864s):
	Trace[2019727887]: [30.000372864s] [30.000372864s] END
	E1002 00:35:31.705074       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1002 00:35:31.705385       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-02 00:35:01.705078542 +0000 UTC m=+0.174270810) (total time: 30.000291717s):
	Trace[939984059]: [30.000291717s] [30.000291717s] END
	E1002 00:35:31.705396       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1002 00:35:31.712871       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-02 00:35:01.705108876 +0000 UTC m=+0.174301144) (total time: 30.007734959s):
	Trace[1474941318]: [30.007734959s] [30.007734959s] END
	E1002 00:35:31.712954       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
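	
	The restarted coredns instance above spends its first 30 seconds timing out against the kubernetes service VIP (10.96.0.1:443) while listing Services, Endpoints, and Namespaces. A minimal sketch of checking that the service still points at the apiserver, assuming the kubectl context matches the profile name:
	
	  # Hedged sketch: verify the default/kubernetes Service and its endpoints.
	  kubectl --context old-k8s-version-920941 get svc,endpoints kubernetes -n default
	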
	
	
	==> describe nodes <==
	Name:               old-k8s-version-920941
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-920941
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=old-k8s-version-920941
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_02T00_32_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 02 Oct 2024 00:32:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-920941
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 02 Oct 2024 00:40:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 02 Oct 2024 00:35:49 +0000   Wed, 02 Oct 2024 00:32:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 02 Oct 2024 00:35:49 +0000   Wed, 02 Oct 2024 00:32:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 02 Oct 2024 00:35:49 +0000   Wed, 02 Oct 2024 00:32:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 02 Oct 2024 00:35:49 +0000   Wed, 02 Oct 2024 00:32:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-920941
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 a423adbfa6564b80a682fc6c52b7ab13
	  System UUID:                82742919-1985-4b91-a1db-427cbcc79dea
	  Boot ID:                    3aa8f718-8507-41e8-80ca-0eb33f6ce70e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 coredns-74ff55c5b-nbdkx                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m53s
	  kube-system                 etcd-old-k8s-version-920941                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m59s
	  kube-system                 kindnet-v2lm7                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m53s
	  kube-system                 kube-apiserver-old-k8s-version-920941             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-controller-manager-old-k8s-version-920941    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-proxy-42b7q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-scheduler-old-k8s-version-920941             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 metrics-server-9975d5f86-49vwr                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m22s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-hlf6t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-g94cm               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m18s (x5 over 8m18s)  kubelet     Node old-k8s-version-920941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m18s (x5 over 8m18s)  kubelet     Node old-k8s-version-920941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m18s (x4 over 8m18s)  kubelet     Node old-k8s-version-920941 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m59s                  kubelet     Node old-k8s-version-920941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m59s                  kubelet     Node old-k8s-version-920941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m59s                  kubelet     Node old-k8s-version-920941 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m53s                  kubelet     Node old-k8s-version-920941 status is now: NodeReady
	  Normal  Starting                 7m51s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m55s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m54s (x8 over 5m54s)  kubelet     Node old-k8s-version-920941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s (x7 over 5m54s)  kubelet     Node old-k8s-version-920941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s (x8 over 5m54s)  kubelet     Node old-k8s-version-920941 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m54s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m37s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Oct 1 23:13] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.010864] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.140336] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [8c7992aa581c091fd0ff51c777b72d5212ca433204a6a429359b1bf731b055f4] <==
	raft2024/10/02 00:32:21 INFO: ea7e25599daad906 switched to configuration voters=(16896983918768216326)
	2024-10-02 00:32:21.871227 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	raft2024/10/02 00:32:22 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/10/02 00:32:22 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/10/02 00:32:22 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/10/02 00:32:22 INFO: ea7e25599daad906 became leader at term 2
	raft2024/10/02 00:32:22 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-10-02 00:32:22.652854 I | etcdserver: published {Name:old-k8s-version-920941 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-10-02 00:32:22.653012 I | embed: ready to serve client requests
	2024-10-02 00:32:22.654538 I | embed: serving client requests on 192.168.76.2:2379
	2024-10-02 00:32:22.654608 I | etcdserver: setting up the initial cluster version to 3.4
	2024-10-02 00:32:22.655000 I | embed: ready to serve client requests
	2024-10-02 00:32:22.655069 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-10-02 00:32:22.655143 I | etcdserver/api: enabled capabilities for version 3.4
	2024-10-02 00:32:22.692358 I | embed: serving client requests on 127.0.0.1:2379
	2024-10-02 00:32:43.566829 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:32:49.659503 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:32:59.659863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:33:09.659839 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:33:19.659633 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:33:29.659746 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:33:39.659634 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:33:49.659650 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:33:59.659628 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:34:09.659592 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [ab6cea850151c0b10687176375d47fb249199d273efd44a9c5d6a1d39e3f623a] <==
	2024-10-02 00:36:30.586826 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:36:40.586732 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:36:50.586769 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:37:00.586679 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:37:10.586627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:37:20.586763 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:37:30.586671 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:37:40.586683 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:37:50.586775 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:38:00.586887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:38:10.586699 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:38:20.586611 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:38:30.586698 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:38:40.586658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:38:50.586582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:39:00.586744 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:39:10.586725 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:39:20.586634 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:39:30.586822 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:39:40.586824 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:39:50.586702 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:40:00.586748 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:40:10.586879 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:40:20.586722 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-02 00:40:30.586821 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 00:40:39 up  8:23,  0 users,  load average: 0.81, 1.74, 2.34
	Linux old-k8s-version-920941 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1051ac83c993493af1544df235a7ec482b2588f13293c80a3af3112eff956a41] <==
	I1002 00:32:49.826800       1 main.go:148] setting mtu 1500 for CNI 
	I1002 00:32:49.826853       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 00:32:49.826891       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 00:32:50.125404       1 controller.go:334] Starting controller kube-network-policies
	I1002 00:32:50.125497       1 controller.go:338] Waiting for informer caches to sync
	I1002 00:32:50.125521       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1002 00:32:50.427332       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1002 00:32:50.427364       1 metrics.go:61] Registering metrics
	I1002 00:32:50.427452       1 controller.go:374] Syncing nftables rules
	I1002 00:33:00.125085       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:33:00.125174       1 main.go:299] handling current node
	I1002 00:33:10.125367       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:33:10.125402       1 main.go:299] handling current node
	I1002 00:33:20.133341       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:33:20.133376       1 main.go:299] handling current node
	I1002 00:33:30.133052       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:33:30.133091       1 main.go:299] handling current node
	I1002 00:33:40.125414       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:33:40.125449       1 main.go:299] handling current node
	I1002 00:33:50.125520       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:33:50.125555       1 main.go:299] handling current node
	I1002 00:34:00.132404       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:34:00.132443       1 main.go:299] handling current node
	I1002 00:34:10.125696       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:34:10.125802       1 main.go:299] handling current node
	
	
	==> kindnet [a767e421df83a00b32383377cc55e4151e7bd6f66d1066e2c265fd0d78172cac] <==
	I1002 00:38:33.357390       1 main.go:299] handling current node
	I1002 00:38:43.366526       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:38:43.366559       1 main.go:299] handling current node
	I1002 00:38:53.366579       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:38:53.366615       1 main.go:299] handling current node
	I1002 00:39:03.357975       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:39:03.358010       1 main.go:299] handling current node
	I1002 00:39:13.361206       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:39:13.361243       1 main.go:299] handling current node
	I1002 00:39:23.366130       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:39:23.366227       1 main.go:299] handling current node
	I1002 00:39:33.357724       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:39:33.357759       1 main.go:299] handling current node
	I1002 00:39:43.364748       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:39:43.364793       1 main.go:299] handling current node
	I1002 00:39:53.360549       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:39:53.360595       1 main.go:299] handling current node
	I1002 00:40:03.358330       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:40:03.358365       1 main.go:299] handling current node
	I1002 00:40:13.361398       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:40:13.361566       1 main.go:299] handling current node
	I1002 00:40:23.366446       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:40:23.366481       1 main.go:299] handling current node
	I1002 00:40:33.366134       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1002 00:40:33.366174       1 main.go:299] handling current node
	
	
	==> kube-apiserver [0a25b599263d779d6664295d455c78ad67aae1ea393696c88eb6887fedc9bf29] <==
	I1002 00:37:36.043855       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1002 00:37:36.043864       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1002 00:38:02.927619       1 handler_proxy.go:102] no RequestInfo found in the context
	E1002 00:38:02.927721       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 00:38:02.927754       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:38:15.184949       1 client.go:360] parsed scheme: "passthrough"
	I1002 00:38:15.185001       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1002 00:38:15.185012       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1002 00:38:50.458479       1 client.go:360] parsed scheme: "passthrough"
	I1002 00:38:50.458659       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1002 00:38:50.458676       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1002 00:39:22.037838       1 client.go:360] parsed scheme: "passthrough"
	I1002 00:39:22.037882       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1002 00:39:22.037892       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1002 00:39:55.475212       1 client.go:360] parsed scheme: "passthrough"
	I1002 00:39:55.475256       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1002 00:39:55.475264       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1002 00:39:59.397757       1 handler_proxy.go:102] no RequestInfo found in the context
	E1002 00:39:59.397841       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 00:39:59.397856       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:40:31.146661       1 client.go:360] parsed scheme: "passthrough"
	I1002 00:40:31.146707       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1002 00:40:31.146715       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [11ec001392bd7c1b6e1804233a847099114357802451cb3dfc4f819481c9eac5] <==
	I1002 00:32:29.162210       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1002 00:32:29.162500       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1002 00:32:29.190557       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1002 00:32:29.194718       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1002 00:32:29.194741       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1002 00:32:29.590760       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 00:32:29.631769       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1002 00:32:29.733113       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1002 00:32:29.734337       1 controller.go:606] quota admission added evaluator for: endpoints
	I1002 00:32:29.738227       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 00:32:30.784058       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1002 00:32:31.440975       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1002 00:32:31.565859       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1002 00:32:39.983851       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 00:32:46.887938       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1002 00:32:46.892419       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1002 00:32:59.052020       1 client.go:360] parsed scheme: "passthrough"
	I1002 00:32:59.052064       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1002 00:32:59.052072       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1002 00:33:40.472446       1 client.go:360] parsed scheme: "passthrough"
	I1002 00:33:40.472515       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1002 00:33:40.472524       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1002 00:34:15.092655       1 client.go:360] parsed scheme: "passthrough"
	I1002 00:34:15.092710       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1002 00:34:15.092882       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [9882de920e5952a14a2e090a9a310cf476deae60c93e9ecd7f4384e8d6012f04] <==
	I1002 00:32:46.891419       1 shared_informer.go:247] Caches are synced for HPA 
	I1002 00:32:46.891526       1 shared_informer.go:247] Caches are synced for resource quota 
	I1002 00:32:46.900428       1 shared_informer.go:247] Caches are synced for endpoint 
	I1002 00:32:46.900686       1 shared_informer.go:247] Caches are synced for GC 
	I1002 00:32:46.902543       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I1002 00:32:46.902763       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I1002 00:32:46.902874       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I1002 00:32:46.903730       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I1002 00:32:46.923515       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I1002 00:32:46.923556       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-bhtjh"
	I1002 00:32:46.924645       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1002 00:32:46.957710       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-v2lm7"
	I1002 00:32:46.990001       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-42b7q"
	I1002 00:32:46.991740       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-nbdkx"
	E1002 00:32:47.057176       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"691e6700-9f49-45d6-ac7c-11a3fa760c6b", ResourceVersion:"267", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63863425951, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40017ae300), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40017ae320)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x40017ae340), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40016d93c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017ae
360), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017ae380), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40017ae3c0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40008b2060), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400174dbe8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000b3e460), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40002877a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400174dc38)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1002 00:32:47.110559       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"691e6700-9f49-45d6-ac7c-11a3fa760c6b", ResourceVersion:"415", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63863425951, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001b54b20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001b54b40)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001b54b60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001b54b80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001b54ba0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40018ede40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b54bc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b54be0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001b54c20)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400199b5c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40019b0ae8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400051c2a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001742438)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40019b0b38)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I1002 00:32:47.162531       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I1002 00:32:47.362751       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1002 00:32:47.411848       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1002 00:32:47.411868       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1002 00:32:48.493187       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1002 00:32:48.509596       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-bhtjh"
	I1002 00:34:16.611978       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E1002 00:34:16.710884       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	E1002 00:34:16.711307       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [ad357c0209d969730c7b744d5c2fbad0a7c610d618ddddb9a2fc771ce023b605] <==
	E1002 00:36:19.447277       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1002 00:36:23.564433       1 request.go:655] Throttling request took 1.048294059s, request: GET:https://192.168.76.2:8443/api/v1?timeout=32s
	W1002 00:36:24.416026       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 00:36:49.949152       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1002 00:36:56.066479       1 request.go:655] Throttling request took 1.048299319s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1?timeout=32s
	W1002 00:36:56.917883       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 00:37:20.450934       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1002 00:37:28.568444       1 request.go:655] Throttling request took 1.048470409s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1002 00:37:29.427637       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 00:37:50.952755       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1002 00:38:01.078081       1 request.go:655] Throttling request took 1.047062219s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W1002 00:38:01.929588       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 00:38:21.454477       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1002 00:38:33.579975       1 request.go:655] Throttling request took 1.047901192s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1002 00:38:34.431383       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 00:38:51.956664       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1002 00:39:06.081935       1 request.go:655] Throttling request took 1.04821422s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1002 00:39:06.933256       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 00:39:22.458645       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1002 00:39:38.583666       1 request.go:655] Throttling request took 1.048250953s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1002 00:39:39.435496       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 00:39:52.960845       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1002 00:40:11.039438       1 request.go:655] Throttling request took 1.001877317s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W1002 00:40:11.937383       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 00:40:23.462676       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [7d0046a7ae8533f8fe5ca98c95ba71a7292736d0effab8db2b682672e751ab5f] <==
	I1002 00:32:48.008891       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1002 00:32:48.008973       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1002 00:32:48.088586       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1002 00:32:48.088674       1 server_others.go:185] Using iptables Proxier.
	I1002 00:32:48.088963       1 server.go:650] Version: v1.20.0
	I1002 00:32:48.094477       1 config.go:315] Starting service config controller
	I1002 00:32:48.094499       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1002 00:32:48.095256       1 config.go:224] Starting endpoint slice config controller
	I1002 00:32:48.095271       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1002 00:32:48.194652       1 shared_informer.go:247] Caches are synced for service config 
	I1002 00:32:48.197414       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [8ff3cc680606db52e7aec936aa5a3da89b1f05ec9c72362a02657eefc93740a6] <==
	I1002 00:35:02.623390       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1002 00:35:02.623458       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1002 00:35:02.670757       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1002 00:35:02.670848       1 server_others.go:185] Using iptables Proxier.
	I1002 00:35:02.671060       1 server.go:650] Version: v1.20.0
	I1002 00:35:02.671561       1 config.go:315] Starting service config controller
	I1002 00:35:02.671570       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1002 00:35:02.691697       1 config.go:224] Starting endpoint slice config controller
	I1002 00:35:02.691716       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1002 00:35:02.785900       1 shared_informer.go:247] Caches are synced for service config 
	I1002 00:35:02.795643       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [19bd9ef04d15a6dccf5970eea7159536699fa21c02ae6b578cb94fb5a6c7af6a] <==
	W1002 00:32:28.290514       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 00:32:28.290574       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 00:32:28.364000       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1002 00:32:28.364751       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 00:32:28.368485       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 00:32:28.368654       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1002 00:32:28.413239       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 00:32:28.413562       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 00:32:28.413675       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 00:32:28.413771       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 00:32:28.413846       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 00:32:28.413927       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 00:32:28.414034       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 00:32:28.414110       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 00:32:28.414187       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 00:32:28.414257       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 00:32:28.414410       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 00:32:28.414445       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 00:32:29.265312       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 00:32:29.265317       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 00:32:29.274268       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 00:32:29.360259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 00:32:29.412763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 00:32:29.425201       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1002 00:32:31.169186       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [87d3a4aa9e474d9d563b52794771a880f456688ecc28b5c9391254d7e388d601] <==
	I1002 00:34:49.399515       1 serving.go:331] Generated self-signed cert in-memory
	W1002 00:34:58.292301       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 00:34:58.292330       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 00:34:58.292339       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 00:34:58.292343       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 00:34:58.681529       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1002 00:34:58.683838       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 00:34:58.684019       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 00:34:58.684119       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1002 00:34:58.884274       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Oct 02 00:38:59 old-k8s-version-920941 kubelet[657]: E1002 00:38:59.040242     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	Oct 02 00:39:04 old-k8s-version-920941 kubelet[657]: E1002 00:39:04.036603     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 00:39:11 old-k8s-version-920941 kubelet[657]: I1002 00:39:11.035901     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0
	Oct 02 00:39:11 old-k8s-version-920941 kubelet[657]: E1002 00:39:11.036751     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	Oct 02 00:39:17 old-k8s-version-920941 kubelet[657]: E1002 00:39:17.036726     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 00:39:26 old-k8s-version-920941 kubelet[657]: I1002 00:39:26.035801     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0
	Oct 02 00:39:26 old-k8s-version-920941 kubelet[657]: E1002 00:39:26.036140     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	Oct 02 00:39:28 old-k8s-version-920941 kubelet[657]: E1002 00:39:28.036705     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 00:39:37 old-k8s-version-920941 kubelet[657]: I1002 00:39:37.036135     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0
	Oct 02 00:39:37 old-k8s-version-920941 kubelet[657]: E1002 00:39:37.039880     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	Oct 02 00:39:42 old-k8s-version-920941 kubelet[657]: E1002 00:39:42.037320     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 00:39:48 old-k8s-version-920941 kubelet[657]: I1002 00:39:48.035896     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0
	Oct 02 00:39:48 old-k8s-version-920941 kubelet[657]: E1002 00:39:48.036281     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	Oct 02 00:39:55 old-k8s-version-920941 kubelet[657]: E1002 00:39:55.037598     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 00:40:02 old-k8s-version-920941 kubelet[657]: I1002 00:40:02.035948     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0
	Oct 02 00:40:02 old-k8s-version-920941 kubelet[657]: E1002 00:40:02.036402     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	Oct 02 00:40:08 old-k8s-version-920941 kubelet[657]: E1002 00:40:08.036742     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 00:40:15 old-k8s-version-920941 kubelet[657]: I1002 00:40:15.036961     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0
	Oct 02 00:40:15 old-k8s-version-920941 kubelet[657]: E1002 00:40:15.037929     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	Oct 02 00:40:20 old-k8s-version-920941 kubelet[657]: E1002 00:40:20.036677     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 00:40:27 old-k8s-version-920941 kubelet[657]: I1002 00:40:27.036048     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0
	Oct 02 00:40:27 old-k8s-version-920941 kubelet[657]: E1002 00:40:27.036953     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	Oct 02 00:40:31 old-k8s-version-920941 kubelet[657]: E1002 00:40:31.037322     657 pod_workers.go:191] Error syncing pod 760c9bc1-fd8e-42ac-9f29-2b1ebe602ede ("metrics-server-9975d5f86-49vwr_kube-system(760c9bc1-fd8e-42ac-9f29-2b1ebe602ede)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 00:40:39 old-k8s-version-920941 kubelet[657]: I1002 00:40:39.036669     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: e948e7eda020fba099d2e760d8e1201abedc6a045610f2704724d8729adc18c0
	Oct 02 00:40:39 old-k8s-version-920941 kubelet[657]: E1002 00:40:39.036998     657 pod_workers.go:191] Error syncing pod 0c776bbe-8a0a-422b-8d17-61f8ff516549 ("dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hlf6t_kubernetes-dashboard(0c776bbe-8a0a-422b-8d17-61f8ff516549)"
	
	
	==> kubernetes-dashboard [6d607cf4c97f9850c9226c3b461edd448574512ffe663325bf684f333c1bef50] <==
	2024/10/02 00:35:26 Using namespace: kubernetes-dashboard
	2024/10/02 00:35:26 Using in-cluster config to connect to apiserver
	2024/10/02 00:35:26 Using secret token for csrf signing
	2024/10/02 00:35:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/02 00:35:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/02 00:35:26 Successful initial request to the apiserver, version: v1.20.0
	2024/10/02 00:35:26 Generating JWE encryption key
	2024/10/02 00:35:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/02 00:35:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/02 00:35:27 Initializing JWE encryption key from synchronized object
	2024/10/02 00:35:27 Creating in-cluster Sidecar client
	2024/10/02 00:35:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/02 00:35:27 Serving insecurely on HTTP port: 9090
	2024/10/02 00:35:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/02 00:36:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/02 00:36:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/02 00:37:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/02 00:37:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/02 00:38:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/02 00:38:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/02 00:39:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/02 00:39:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/02 00:40:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/02 00:35:26 Starting overwatch
	
	
	==> storage-provisioner [287dac3a47db438435ee5f4915263ed26e3f6eae852bb618365b6490d95b4453] <==
	I1002 00:35:45.158406       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 00:35:45.200083       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 00:35:45.200149       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 00:36:02.672365       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 00:36:02.672582       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-920941_ffda5cb1-3f21-462b-b425-41ba70181056!
	I1002 00:36:02.672659       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e13019b2-a311-4fbc-aafb-05879fd8564d", APIVersion:"v1", ResourceVersion:"838", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-920941_ffda5cb1-3f21-462b-b425-41ba70181056 became leader
	I1002 00:36:02.773598       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-920941_ffda5cb1-3f21-462b-b425-41ba70181056!
	
	
	==> storage-provisioner [4a7d63ad64058ab8a816f95771113a6646e7f49235241d0b9c99ee631cdf7f38] <==
	I1002 00:35:02.137535       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 00:35:32.139856       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-920941 -n old-k8s-version-920941
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-920941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-49vwr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-920941 describe pod metrics-server-9975d5f86-49vwr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-920941 describe pod metrics-server-9975d5f86-49vwr: exit status 1 (132.655404ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-49vwr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-920941 describe pod metrics-server-9975d5f86-49vwr: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (372.14s)

                                                
                                    

Test pass (298/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.73
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 4.97
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 215.79
31 TestAddons/serial/GCPAuth/Namespaces 0.17
33 TestAddons/parallel/Registry 17.15
34 TestAddons/parallel/Ingress 19.75
35 TestAddons/parallel/InspektorGadget 11.85
36 TestAddons/parallel/MetricsServer 6.81
38 TestAddons/parallel/CSI 55.03
39 TestAddons/parallel/Headlamp 15.97
40 TestAddons/parallel/CloudSpanner 6.58
41 TestAddons/parallel/LocalPath 51.82
42 TestAddons/parallel/NvidiaDevicePlugin 5.9
43 TestAddons/parallel/Yakd 11.79
44 TestAddons/StoppedEnableDisable 12.26
45 TestCertOptions 38.84
46 TestCertExpiration 231.16
48 TestForceSystemdFlag 37.19
49 TestForceSystemdEnv 40.85
50 TestDockerEnvContainerd 48.36
55 TestErrorSpam/setup 29.7
56 TestErrorSpam/start 0.69
57 TestErrorSpam/status 1.04
58 TestErrorSpam/pause 1.69
59 TestErrorSpam/unpause 1.73
60 TestErrorSpam/stop 1.45
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 48.25
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 6.21
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.09
72 TestFunctional/serial/CacheCmd/cache/add_local 1.16
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.96
77 TestFunctional/serial/CacheCmd/cache/delete 0.1
78 TestFunctional/serial/MinikubeKubectlCmd 0.13
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 97.46
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.69
83 TestFunctional/serial/LogsFileCmd 1.72
84 TestFunctional/serial/InvalidService 4.31
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 11.07
88 TestFunctional/parallel/DryRun 0.42
89 TestFunctional/parallel/InternationalLanguage 0.19
90 TestFunctional/parallel/StatusCmd 1.09
94 TestFunctional/parallel/ServiceCmdConnect 10.57
95 TestFunctional/parallel/AddonsCmd 0.14
96 TestFunctional/parallel/PersistentVolumeClaim 24.11
98 TestFunctional/parallel/SSHCmd 0.66
99 TestFunctional/parallel/CpCmd 2.26
101 TestFunctional/parallel/FileSync 0.35
102 TestFunctional/parallel/CertSync 1.99
106 TestFunctional/parallel/NodeLabels 0.12
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
110 TestFunctional/parallel/License 0.27
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
124 TestFunctional/parallel/ProfileCmd/profile_list 0.41
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
126 TestFunctional/parallel/MountCmd/any-port 6.93
127 TestFunctional/parallel/ServiceCmd/List 0.51
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
130 TestFunctional/parallel/ServiceCmd/Format 0.37
131 TestFunctional/parallel/ServiceCmd/URL 0.37
132 TestFunctional/parallel/MountCmd/specific-port 2.32
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.25
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.21
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
140 TestFunctional/parallel/ImageCommands/ImageBuild 4
141 TestFunctional/parallel/ImageCommands/Setup 0.74
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.08
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 115.28
159 TestMultiControlPlane/serial/DeployApp 33.42
160 TestMultiControlPlane/serial/PingHostFromPods 1.5
161 TestMultiControlPlane/serial/AddWorkerNode 21.95
162 TestMultiControlPlane/serial/NodeLabels 0.12
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.98
164 TestMultiControlPlane/serial/CopyFile 18.29
165 TestMultiControlPlane/serial/StopSecondaryNode 12.82
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
167 TestMultiControlPlane/serial/RestartSecondaryNode 18.65
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.98
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 130.04
170 TestMultiControlPlane/serial/DeleteSecondaryNode 9.62
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
172 TestMultiControlPlane/serial/StopCluster 36
173 TestMultiControlPlane/serial/RestartCluster 66.93
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
175 TestMultiControlPlane/serial/AddSecondaryNode 43.32
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.99
180 TestJSONOutput/start/Command 48.11
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.74
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.63
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.85
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.21
205 TestKicCustomNetwork/create_custom_network 36.66
206 TestKicCustomNetwork/use_default_bridge_network 32.87
207 TestKicExistingNetwork 31.67
208 TestKicCustomSubnet 33.15
209 TestKicStaticIP 33.74
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 63.91
214 TestMountStart/serial/StartWithMountFirst 6.82
215 TestMountStart/serial/VerifyMountFirst 0.24
216 TestMountStart/serial/StartWithMountSecond 5.93
217 TestMountStart/serial/VerifyMountSecond 0.25
218 TestMountStart/serial/DeleteFirst 1.6
219 TestMountStart/serial/VerifyMountPostDelete 0.24
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 7.92
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 66.25
226 TestMultiNode/serial/DeployApp2Nodes 56.36
227 TestMultiNode/serial/PingHostFrom2Pods 0.95
228 TestMultiNode/serial/AddNode 15.64
229 TestMultiNode/serial/MultiNodeLabels 0.1
230 TestMultiNode/serial/ProfileList 0.65
231 TestMultiNode/serial/CopyFile 9.69
232 TestMultiNode/serial/StopNode 2.2
233 TestMultiNode/serial/StartAfterStop 9.48
234 TestMultiNode/serial/RestartKeepsNodes 122.75
235 TestMultiNode/serial/DeleteNode 5.41
236 TestMultiNode/serial/StopMultiNode 24.02
237 TestMultiNode/serial/RestartMultiNode 56.69
238 TestMultiNode/serial/ValidateNameConflict 32.91
243 TestPreload 113.56
245 TestScheduledStopUnix 104.34
248 TestInsufficientStorage 10.26
249 TestRunningBinaryUpgrade 74.78
251 TestKubernetesUpgrade 352.38
252 TestMissingContainerUpgrade 193.39
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
255 TestNoKubernetes/serial/StartWithK8s 36.31
256 TestNoKubernetes/serial/StartWithStopK8s 19.41
257 TestNoKubernetes/serial/Start 6.37
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
259 TestNoKubernetes/serial/ProfileList 0.95
260 TestNoKubernetes/serial/Stop 1.21
261 TestNoKubernetes/serial/StartNoArgs 7.26
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
263 TestStoppedBinaryUpgrade/Setup 0.79
264 TestStoppedBinaryUpgrade/Upgrade 103.01
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
274 TestPause/serial/Start 97.66
282 TestNetworkPlugins/group/false 3.64
286 TestPause/serial/SecondStartNoReconfiguration 8.83
287 TestPause/serial/Pause 0.99
288 TestPause/serial/VerifyStatus 0.36
289 TestPause/serial/Unpause 0.8
290 TestPause/serial/PauseAgain 1.03
291 TestPause/serial/DeletePaused 2.86
292 TestPause/serial/VerifyDeletedResources 0.45
294 TestStartStop/group/old-k8s-version/serial/FirstStart 140.54
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.61
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.16
297 TestStartStop/group/old-k8s-version/serial/Stop 12.04
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
301 TestStartStop/group/no-preload/serial/FirstStart 70.08
302 TestStartStop/group/no-preload/serial/DeployApp 9.37
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
304 TestStartStop/group/no-preload/serial/Stop 12.13
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/no-preload/serial/SecondStart 301.82
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
310 TestStartStop/group/old-k8s-version/serial/Pause 3.2
312 TestStartStop/group/embed-certs/serial/FirstStart 83.79
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
316 TestStartStop/group/no-preload/serial/Pause 3.83
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.48
319 TestStartStop/group/embed-certs/serial/DeployApp 8.32
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
321 TestStartStop/group/embed-certs/serial/Stop 12.07
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
323 TestStartStop/group/embed-certs/serial/SecondStart 268
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.54
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.58
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.14
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.03
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
332 TestStartStop/group/embed-certs/serial/Pause 3.36
334 TestStartStop/group/newest-cni/serial/FirstStart 39.78
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.14
339 TestNetworkPlugins/group/auto/Start 57.79
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.27
342 TestStartStop/group/newest-cni/serial/Stop 1.32
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
344 TestStartStop/group/newest-cni/serial/SecondStart 21.21
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
348 TestStartStop/group/newest-cni/serial/Pause 3.27
349 TestNetworkPlugins/group/kindnet/Start 91.41
350 TestNetworkPlugins/group/auto/KubeletFlags 0.34
351 TestNetworkPlugins/group/auto/NetCatPod 9.39
352 TestNetworkPlugins/group/auto/DNS 0.22
353 TestNetworkPlugins/group/auto/Localhost 0.21
354 TestNetworkPlugins/group/auto/HairPin 0.2
355 TestNetworkPlugins/group/calico/Start 58.17
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
358 TestNetworkPlugins/group/kindnet/NetCatPod 9.27
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/DNS 0.21
361 TestNetworkPlugins/group/kindnet/Localhost 0.17
362 TestNetworkPlugins/group/kindnet/HairPin 0.17
363 TestNetworkPlugins/group/calico/KubeletFlags 0.31
364 TestNetworkPlugins/group/calico/NetCatPod 10.28
365 TestNetworkPlugins/group/calico/DNS 0.25
366 TestNetworkPlugins/group/calico/Localhost 0.31
367 TestNetworkPlugins/group/calico/HairPin 0.24
368 TestNetworkPlugins/group/custom-flannel/Start 59.97
369 TestNetworkPlugins/group/enable-default-cni/Start 49.03
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.28
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.27
374 TestNetworkPlugins/group/custom-flannel/DNS 0.2
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
380 TestNetworkPlugins/group/flannel/Start 65.4
381 TestNetworkPlugins/group/bridge/Start 75.99
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
384 TestNetworkPlugins/group/flannel/NetCatPod 9.25
385 TestNetworkPlugins/group/flannel/DNS 0.18
386 TestNetworkPlugins/group/flannel/Localhost 0.17
387 TestNetworkPlugins/group/flannel/HairPin 0.16
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 12.26
390 TestNetworkPlugins/group/bridge/DNS 0.23
391 TestNetworkPlugins/group/bridge/Localhost 0.19
392 TestNetworkPlugins/group/bridge/HairPin 0.22
TestDownloadOnly/v1.20.0/json-events (6.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-994430 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-994430 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.732959052s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.73s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1001 23:43:05.027373 1750505 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1001 23:43:05.027453 1750505 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-994430
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-994430: exit status 85 (71.457231ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-994430 | jenkins | v1.34.0 | 01 Oct 24 23:42 UTC |          |
	|         | -p download-only-994430        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:42:58
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:42:58.342841 1750510 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:42:58.343038 1750510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:42:58.343064 1750510 out.go:358] Setting ErrFile to fd 2...
	I1001 23:42:58.343083 1750510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:42:58.343342 1750510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	W1001 23:42:58.343508 1750510 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19740-1745120/.minikube/config/config.json: open /home/jenkins/minikube-integration/19740-1745120/.minikube/config/config.json: no such file or directory
	I1001 23:42:58.343950 1750510 out.go:352] Setting JSON to true
	I1001 23:42:58.344871 1750510 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26726,"bootTime":1727799453,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 23:42:58.344968 1750510 start.go:139] virtualization:  
	I1001 23:42:58.347944 1750510 out.go:97] [download-only-994430] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1001 23:42:58.348129 1750510 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 23:42:58.348174 1750510 notify.go:220] Checking for updates...
	I1001 23:42:58.350192 1750510 out.go:169] MINIKUBE_LOCATION=19740
	I1001 23:42:58.352358 1750510 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:42:58.354052 1750510 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1001 23:42:58.356067 1750510 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	I1001 23:42:58.357859 1750510 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1001 23:42:58.361702 1750510 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 23:42:58.361994 1750510 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:42:58.390063 1750510 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:42:58.390172 1750510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:42:58.435894 1750510 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-01 23:42:58.426711886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:42:58.436005 1750510 docker.go:318] overlay module found
	I1001 23:42:58.437724 1750510 out.go:97] Using the docker driver based on user configuration
	I1001 23:42:58.437748 1750510 start.go:297] selected driver: docker
	I1001 23:42:58.437754 1750510 start.go:901] validating driver "docker" against <nil>
	I1001 23:42:58.437861 1750510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:42:58.493474 1750510 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-01 23:42:58.484670796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:42:58.493674 1750510 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:42:58.493939 1750510 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1001 23:42:58.494100 1750510 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 23:42:58.496577 1750510 out.go:169] Using Docker driver with root privileges
	I1001 23:42:58.498470 1750510 cni.go:84] Creating CNI manager for ""
	I1001 23:42:58.498534 1750510 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 23:42:58.498548 1750510 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 23:42:58.498618 1750510 start.go:340] cluster config:
	{Name:download-only-994430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-994430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:42:58.500610 1750510 out.go:97] Starting "download-only-994430" primary control-plane node in "download-only-994430" cluster
	I1001 23:42:58.500628 1750510 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1001 23:42:58.502553 1750510 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1001 23:42:58.502577 1750510 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1001 23:42:58.502712 1750510 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 23:42:58.517943 1750510 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 23:42:58.518116 1750510 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 23:42:58.518222 1750510 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 23:42:58.637333 1750510 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1001 23:42:58.637358 1750510 cache.go:56] Caching tarball of preloaded images
	I1001 23:42:58.637515 1750510 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1001 23:42:58.640438 1750510 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1001 23:42:58.640489 1750510 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1001 23:42:58.723760 1750510 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1001 23:43:03.228597 1750510 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1001 23:43:03.228695 1750510 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-994430 host does not exist
	  To start a cluster, run: "minikube start -p download-only-994430"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-994430
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (4.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-271061 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-271061 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.973523128s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.97s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1001 23:43:10.395108 1750505 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1001 23:43:10.395150 1750505 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-1745120/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-271061
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-271061: exit status 85 (67.633654ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-994430 | jenkins | v1.34.0 | 01 Oct 24 23:42 UTC |                     |
	|         | -p download-only-994430        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC | 01 Oct 24 23:43 UTC |
	| delete  | -p download-only-994430        | download-only-994430 | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC | 01 Oct 24 23:43 UTC |
	| start   | -o=json --download-only        | download-only-271061 | jenkins | v1.34.0 | 01 Oct 24 23:43 UTC |                     |
	|         | -p download-only-271061        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:43:05
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:43:05.463330 1750714 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:43:05.463448 1750714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:43:05.463458 1750714 out.go:358] Setting ErrFile to fd 2...
	I1001 23:43:05.463463 1750714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:43:05.463695 1750714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	I1001 23:43:05.464075 1750714 out.go:352] Setting JSON to true
	I1001 23:43:05.464955 1750714 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26733,"bootTime":1727799453,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 23:43:05.465034 1750714 start.go:139] virtualization:  
	I1001 23:43:05.467887 1750714 out.go:97] [download-only-271061] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 23:43:05.468049 1750714 notify.go:220] Checking for updates...
	I1001 23:43:05.470052 1750714 out.go:169] MINIKUBE_LOCATION=19740
	I1001 23:43:05.472359 1750714 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:43:05.474101 1750714 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1001 23:43:05.476542 1750714 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	I1001 23:43:05.478518 1750714 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1001 23:43:05.482247 1750714 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 23:43:05.482528 1750714 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:43:05.516543 1750714 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:43:05.516645 1750714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:43:05.561994 1750714 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-01 23:43:05.552218054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:43:05.562109 1750714 docker.go:318] overlay module found
	I1001 23:43:05.564580 1750714 out.go:97] Using the docker driver based on user configuration
	I1001 23:43:05.564607 1750714 start.go:297] selected driver: docker
	I1001 23:43:05.564625 1750714 start.go:901] validating driver "docker" against <nil>
	I1001 23:43:05.564732 1750714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:43:05.611548 1750714 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-01 23:43:05.602584751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:43:05.611779 1750714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:43:05.612071 1750714 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1001 23:43:05.612229 1750714 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 23:43:05.614578 1750714 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-271061 host does not exist
	  To start a cluster, run: "minikube start -p download-only-271061"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-271061
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
I1001 23:43:11.589034 1750505 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-632362 --alsologtostderr --binary-mirror http://127.0.0.1:39787 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-632362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-632362
--- PASS: TestBinaryMirror (0.56s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:932: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-515343
addons_test.go:932: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-515343: exit status 85 (82.344072ms)

                                                
                                                
-- stdout --
	* Profile "addons-515343" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-515343"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:943: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-515343
addons_test.go:943: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-515343: exit status 85 (91.849266ms)

                                                
                                                
-- stdout --
	* Profile "addons-515343" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-515343"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/Setup (215.79s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-515343 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-515343 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m35.785557794s)
--- PASS: TestAddons/Setup (215.79s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-515343 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-515343 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/parallel/Registry (17.15s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.844337ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-r4nd7" [af28ec25-e4aa-4c5c-a962-bdbc9c202d33] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004406919s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ghb94" [c982414a-f5f3-4738-a511-fe432f305818] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004006323s
addons_test.go:331: (dbg) Run:  kubectl --context addons-515343 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-515343 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-515343 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.14424942s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 ip
2024/10/01 23:50:54 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.15s)

                                                
                                    
TestAddons/parallel/Ingress (19.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-515343 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-515343 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-515343 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a737574e-00e7-4a85-93dc-45a98eb8ad4b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a737574e-00e7-4a85-93dc-45a98eb8ad4b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002943722s
I1001 23:52:15.586823 1750505 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-515343 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-515343 addons disable ingress-dns --alsologtostderr -v=1: (1.121002457s)
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable ingress --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-515343 addons disable ingress --alsologtostderr -v=1: (7.760280651s)
--- PASS: TestAddons/parallel/Ingress (19.75s)
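
The two probes above can be reproduced by hand; a sketch assuming the same profile (the host names nginx.example.com and hello-john.test come from the test's example manifests):

minikube -p addons-515343 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"   # ingress routing selected by Host header
nslookup hello-john.test "$(minikube -p addons-515343 ip)"                               # ingress-dns resolves test names to the node IP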

                                                
                                    
TestAddons/parallel/InspektorGadget (11.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dtsn2" [f13d19bf-8696-409f-bcd3-6205720989fe] Running
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003777843s
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-515343 addons disable inspektor-gadget --alsologtostderr -v=1: (5.848071929s)
--- PASS: TestAddons/parallel/InspektorGadget (11.85s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.508062ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-42hls" [c4a49c1b-2064-4012-8dbe-0c11d66e402d] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004308423s
addons_test.go:402: (dbg) Run:  kubectl --context addons-515343 top pods -n kube-system
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.81s)

                                                
                                    
TestAddons/parallel/CSI (55.03s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1001 23:51:18.376095 1750505 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1001 23:51:18.381781 1750505 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1001 23:51:18.381810 1750505 kapi.go:107] duration metric: took 7.862646ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.87191ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-515343 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-515343 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c933e96f-0611-4831-bd8d-f9cc44b8421a] Pending
helpers_test.go:344: "task-pv-pod" [c933e96f-0611-4831-bd8d-f9cc44b8421a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c933e96f-0611-4831-bd8d-f9cc44b8421a] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.007635244s
addons_test.go:511: (dbg) Run:  kubectl --context addons-515343 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-515343 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-515343 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-515343 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-515343 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-515343 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-515343 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [89416fac-2175-4606-b810-218dc41cbc64] Pending
helpers_test.go:344: "task-pv-pod-restore" [89416fac-2175-4606-b810-218dc41cbc64] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [89416fac-2175-4606-b810-218dc41cbc64] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003926177s
addons_test.go:553: (dbg) Run:  kubectl --context addons-515343 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-515343 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-515343 delete volumesnapshot new-snapshot-demo
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-515343 addons disable volumesnapshots --alsologtostderr -v=1: (1.284436117s)
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-515343 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.848899033s)
--- PASS: TestAddons/parallel/CSI (55.03s)
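
This block drives the csi-hostpath-driver and volumesnapshots addons end to end: bind a PVC, snapshot it, and restore the snapshot into a new PVC via dataSource. A minimal sketch of the snapshot-and-restore objects follows; the names new-snapshot-demo, hpvc and hpvc-restore match the log, while csi-hostpath-snapclass and csi-hostpath-sc are assumptions about the addon's default class names rather than values shown above:

kubectl --context addons-515343 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumption
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumption
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF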

                                                
                                    
TestAddons/parallel/Headlamp (15.97s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:741: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-515343 --alsologtostderr -v=1
addons_test.go:741: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-515343 --alsologtostderr -v=1: (1.239471005s)
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-72p2b" [98ad70c0-63df-416d-b3f0-1d524f8a8972] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-72p2b" [98ad70c0-63df-416d-b3f0-1d524f8a8972] Running
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003868894s
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable headlamp --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-515343 addons disable headlamp --alsologtostderr -v=1: (5.725134702s)
--- PASS: TestAddons/parallel/Headlamp (15.97s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-wnkhg" [d428eb41-3545-437f-8f0f-cb3079706c8a] Running
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00432862s
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
TestAddons/parallel/LocalPath (51.82s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:881: (dbg) Run:  kubectl --context addons-515343 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:887: (dbg) Run:  kubectl --context addons-515343 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:891: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-515343 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6b44b9a9-eb05-4afb-a172-7b5875fdc785] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6b44b9a9-eb05-4afb-a172-7b5875fdc785] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6b44b9a9-eb05-4afb-a172-7b5875fdc785] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003735223s
addons_test.go:899: (dbg) Run:  kubectl --context addons-515343 get pvc test-pvc -o=json
addons_test.go:908: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 ssh "cat /opt/local-path-provisioner/pvc-6b5cfd85-632c-42e2-a6ce-70df593be27d_default_test-pvc/file1"
addons_test.go:920: (dbg) Run:  kubectl --context addons-515343 delete pod test-local-path
addons_test.go:924: (dbg) Run:  kubectl --context addons-515343 delete pvc test-pvc
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-515343 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.307109042s)
--- PASS: TestAddons/parallel/LocalPath (51.82s)
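
The local-path test provisions a hostPath-backed PVC through the storage-provisioner-rancher addon and then reads the written file straight off the node. A sketch of such a claim, assuming the provisioner's conventional local-path storage class (the class name is not shown in the log):

kubectl --context addons-515343 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumption: the addon's default class
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
EOF
# after a pod has written to the volume, the data sits on the node under
# /opt/local-path-provisioner/<pv>_<namespace>_<pvc>/ (path pattern taken from the ssh step above)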

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.9s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-r5vkz" [7e24d387-21f7-488f-b9f1-64ae0a88f60c] Running
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004610583s
addons_test.go:959: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-515343
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.90s)

                                                
                                    
TestAddons/parallel/Yakd (11.79s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-v9nc2" [91f4f6d5-abaa-4d98-b753-f1a88a9fe228] Running
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003999404s
addons_test.go:971: (dbg) Run:  out/minikube-linux-arm64 -p addons-515343 addons disable yakd --alsologtostderr -v=1
addons_test.go:971: (dbg) Done: out/minikube-linux-arm64 -p addons-515343 addons disable yakd --alsologtostderr -v=1: (5.784225121s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-515343
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-515343: (11.997476981s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-515343
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-515343
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-515343
--- PASS: TestAddons/StoppedEnableDisable (12.26s)
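
What this test demonstrates is that addon state can be toggled while the cluster is stopped; a sketch with the plain minikube CLI, using the same commands as above:

minikube stop -p addons-515343
minikube addons enable dashboard -p addons-515343    # recorded in the profile, applied on the next start
minikube addons disable dashboard -p addons-515343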

                                                
                                    
TestCertOptions (38.84s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-043656 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-043656 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.210538376s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-043656 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-043656 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-043656 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-043656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-043656
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-043656: (1.960138409s)
--- PASS: TestCertOptions (38.84s)
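
A sketch of the same flow with the plain minikube CLI: start with extra apiserver SANs and a non-default port, then confirm both landed in the serving certificate and in the kubeconfig (flags and file paths are the ones shown in the log):

minikube start -p cert-options-043656 --driver=docker --container-runtime=containerd \
  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
  --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555
minikube -p cert-options-043656 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
kubectl --context cert-options-043656 config view | grep 8555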

                                                
                                    
TestCertExpiration (231.16s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-603955 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-603955 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.217464105s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-603955 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-603955 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.590751468s)
helpers_test.go:175: Cleaning up "cert-expiration-603955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-603955
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-603955: (2.352293708s)
--- PASS: TestCertExpiration (231.16s)

                                                
                                    
TestForceSystemdFlag (37.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-305977 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1002 00:29:51.127023 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-305977 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.881337382s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-305977 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-305977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-305977
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-305977: (2.025150006s)
--- PASS: TestForceSystemdFlag (37.19s)
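
The check behind this test, sketched with the plain CLI: --force-systemd should switch containerd's runc cgroup driver to systemd, which is visible in the node's containerd config (the grep is an assumption about what the test asserts):

minikube start -p force-systemd-flag-305977 --driver=docker --container-runtime=containerd --force-systemd
minikube -p force-systemd-flag-305977 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
# expected to show SystemdCgroup = true when the flag is honoured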

                                                
                                    
TestForceSystemdEnv (40.85s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-378598 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-378598 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.200854372s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-378598 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-378598" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-378598
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-378598: (2.257072655s)
--- PASS: TestForceSystemdEnv (40.85s)

                                                
                                    
TestDockerEnvContainerd (48.36s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-321323 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-321323 --driver=docker  --container-runtime=containerd: (32.957770167s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-321323"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-74eqMJDMtyjj/agent.1772951" SSH_AGENT_PID="1772952" DOCKER_HOST=ssh://docker@127.0.0.1:34669 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-74eqMJDMtyjj/agent.1772951" SSH_AGENT_PID="1772952" DOCKER_HOST=ssh://docker@127.0.0.1:34669 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-74eqMJDMtyjj/agent.1772951" SSH_AGENT_PID="1772952" DOCKER_HOST=ssh://docker@127.0.0.1:34669 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.157174101s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-74eqMJDMtyjj/agent.1772951" SSH_AGENT_PID="1772952" DOCKER_HOST=ssh://docker@127.0.0.1:34669 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-321323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-321323
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-321323: (1.921947208s)
--- PASS: TestDockerEnvContainerd (48.36s)
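
A sketch of the docker-env-over-SSH workflow this test exercises, with the plain minikube CLI; the image tag and testdata path are the ones used above:

minikube start -p dockerenv-321323 --driver=docker --container-runtime=containerd
eval "$(minikube -p dockerenv-321323 docker-env --ssh-host --ssh-add)"   # DOCKER_HOST becomes ssh://docker@<node>
docker version
docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
docker image ls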

                                                
                                    
TestErrorSpam/setup (29.7s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-146985 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-146985 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-146985 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-146985 --driver=docker  --container-runtime=containerd: (29.697684674s)
--- PASS: TestErrorSpam/setup (29.70s)

                                                
                                    
TestErrorSpam/start (0.69s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

                                                
                                    
TestErrorSpam/status (1.04s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 status
--- PASS: TestErrorSpam/status (1.04s)

                                                
                                    
TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 pause
--- PASS: TestErrorSpam/pause (1.69s)

                                                
                                    
TestErrorSpam/unpause (1.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (1.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 stop: (1.257676622s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-146985 --log_dir /tmp/nospam-146985 stop
--- PASS: TestErrorSpam/stop (1.45s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19740-1745120/.minikube/files/etc/test/nested/copy/1750505/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (48.25s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-126185 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-126185 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.244115087s)
--- PASS: TestFunctional/serial/StartWithProxy (48.25s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.21s)

=== RUN   TestFunctional/serial/SoftStart
I1001 23:55:01.233377 1750505 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-126185 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-126185 --alsologtostderr -v=8: (6.209416554s)
functional_test.go:663: soft start took 6.210857465s for "functional-126185" cluster.
I1001 23:55:07.443164 1750505 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.21s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-126185 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-126185 cache add registry.k8s.io/pause:3.1: (1.533559062s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-126185 cache add registry.k8s.io/pause:3.3: (1.347450174s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-126185 cache add registry.k8s.io/pause:latest: (1.20775993s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-126185 /tmp/TestFunctionalserialCacheCmdcacheadd_local2482738328/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 cache add minikube-local-cache-test:functional-126185
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 cache delete minikube-local-cache-test:functional-126185
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-126185
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-126185 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (284.398605ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-126185 cache reload: (1.057405608s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)
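
The cache workflow being verified, as plain commands (all taken from the runs above): add an image to minikube's local cache, delete it from the node, then push the cache back with cache reload:

minikube -p functional-126185 cache add registry.k8s.io/pause:latest
minikube -p functional-126185 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-126185 cache reload     # re-loads every cached image into the node
minikube -p functional-126185 ssh sudo crictl inspecti registry.k8s.io/pause:latest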

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 kubectl -- --context functional-126185 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-126185 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (97.46s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-126185 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1001 23:56:48.057725 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:56:48.064224 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:56:48.075622 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:56:48.097065 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:56:48.138418 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:56:48.219928 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:56:48.381565 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:56:48.703223 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:56:49.345219 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:56:50.626623 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-126185 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m37.462500492s)
functional_test.go:761: restart took 1m37.462996087s for "functional-126185" cluster.
I1001 23:56:53.047057 1750505 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (97.46s)
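
The restart above passes component flags through --extra-config; a sketch of the same invocation with the plain CLI, followed by the control-plane check the next test performs:

minikube start -p functional-126185 --wait=all \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
kubectl --context functional-126185 get po -l tier=control-plane -n kube-system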

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-126185 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.69s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 logs
E1001 23:56:53.188113 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-126185 logs: (1.684787271s)
--- PASS: TestFunctional/serial/LogsCmd (1.69s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.72s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 logs --file /tmp/TestFunctionalserialLogsFileCmd1792064181/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-126185 logs --file /tmp/TestFunctionalserialLogsFileCmd1792064181/001/logs.txt: (1.719887532s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.72s)

                                                
                                    
TestFunctional/serial/InvalidService (4.31s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-126185 apply -f testdata/invalidsvc.yaml
E1001 23:56:58.310201 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-126185
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-126185: exit status 115 (480.194278ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32691 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-126185 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-126185 config get cpus: exit status 14 (66.372853ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-126185 config get cpus: exit status 14 (67.084772ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-126185 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-126185 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1788138: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.07s)

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-126185 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-126185 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (175.697954ms)

                                                
                                                
-- stdout --
	* [functional-126185] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 23:57:33.637180 1787539 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:57:33.637334 1787539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:57:33.637345 1787539 out.go:358] Setting ErrFile to fd 2...
	I1001 23:57:33.637352 1787539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:57:33.637598 1787539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	I1001 23:57:33.637936 1787539 out.go:352] Setting JSON to false
	I1001 23:57:33.638971 1787539 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":27601,"bootTime":1727799453,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 23:57:33.639056 1787539 start.go:139] virtualization:  
	I1001 23:57:33.642221 1787539 out.go:177] * [functional-126185] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 23:57:33.645034 1787539 notify.go:220] Checking for updates...
	I1001 23:57:33.645945 1787539 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:57:33.648617 1787539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:57:33.652009 1787539 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1001 23:57:33.655431 1787539 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	I1001 23:57:33.657666 1787539 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 23:57:33.660273 1787539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:57:33.663184 1787539 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 23:57:33.663718 1787539 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:57:33.685606 1787539 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:57:33.685731 1787539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:57:33.745140 1787539 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-10-01 23:57:33.735614424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:57:33.745249 1787539 docker.go:318] overlay module found
	I1001 23:57:33.748986 1787539 out.go:177] * Using the docker driver based on existing profile
	I1001 23:57:33.752048 1787539 start.go:297] selected driver: docker
	I1001 23:57:33.752065 1787539 start.go:901] validating driver "docker" against &{Name:functional-126185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-126185 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:57:33.752180 1787539 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:57:33.755412 1787539 out.go:201] 
	W1001 23:57:33.758562 1787539 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1001 23:57:33.761259 1787539 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-126185 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)
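Both invocations above are validation-only (--dry-run): the first fails with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB is below the 1800MB usable minimum, and the second passes because it omits --memory and keeps the profile's existing 4000MB. A rough manual equivalent:
	out/minikube-linux-arm64 start -p functional-126185 --dry-run --memory 250MB --driver=docker --container-runtime=containerd   # rejected, exit 23
	out/minikube-linux-arm64 start -p functional-126185 --dry-run --driver=docker --container-runtime=containerd                  # accepted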

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-126185 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-126185 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (191.263086ms)

                                                
                                                
-- stdout --
	* [functional-126185] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 23:57:33.453833 1787492 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:57:33.453982 1787492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:57:33.453994 1787492 out.go:358] Setting ErrFile to fd 2...
	I1001 23:57:33.453999 1787492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:57:33.455354 1787492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	I1001 23:57:33.455779 1787492 out.go:352] Setting JSON to false
	I1001 23:57:33.456820 1787492 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":27601,"bootTime":1727799453,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 23:57:33.456892 1787492 start.go:139] virtualization:  
	I1001 23:57:33.460278 1787492 out.go:177] * [functional-126185] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1001 23:57:33.462194 1787492 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:57:33.462294 1787492 notify.go:220] Checking for updates...
	I1001 23:57:33.466462 1787492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:57:33.468754 1787492 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1001 23:57:33.470977 1787492 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	I1001 23:57:33.473070 1787492 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 23:57:33.475068 1787492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:57:33.477929 1787492 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 23:57:33.478537 1787492 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:57:33.513588 1787492 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 23:57:33.513707 1787492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 23:57:33.568198 1787492 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-10-01 23:57:33.558527168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 23:57:33.568316 1787492 docker.go:318] overlay module found
	I1001 23:57:33.572749 1787492 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1001 23:57:33.575343 1787492 start.go:297] selected driver: docker
	I1001 23:57:33.575362 1787492 start.go:901] validating driver "docker" against &{Name:functional-126185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-126185 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:57:33.575532 1787492 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:57:33.579032 1787492 out.go:201] 
	W1001 23:57:33.581869 1787492 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1001 23:57:33.584615 1787492 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
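The French output above comes from minikube's bundled translations; the test drives it through the locale environment rather than a flag. A sketch, assuming LC_ALL is the variable the suite sets (any fr locale value should behave the same):
	LC_ALL=fr out/minikube-linux-arm64 start -p functional-126185 --dry-run --memory 250MB --driver=docker --container-runtime=containerd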

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-126185 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-126185 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-spcfg" [b57ab95d-e987-4911-9403-c6472cfadf69] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-spcfg" [b57ab95d-e987-4911-9403-c6472cfadf69] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003258755s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32634
functional_test.go:1675: http://192.168.49.2:32634: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-spcfg

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32634
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.57s)
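The flow above is plain NodePort plumbing and can be reproduced by hand; names and image mirror the test, while the port (32634 here) is allocated per run:
	kubectl --context functional-126185 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-126185 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-126185 service hello-node-connect --url
	curl "$(out/minikube-linux-arm64 -p functional-126185 service hello-node-connect --url)"
The echoed request headers in the body above confirm the request traversed the NodePort to the echoserver pod.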

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [42b645a5-1196-4b6d-947e-e25c589f2bf3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014030697s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-126185 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-126185 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-126185 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-126185 apply -f testdata/storage-provisioner/pod.yaml
E1001 23:57:08.553330 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c3efb0ba-1159-4f89-bd21-2855c5362d86] Pending
helpers_test.go:344: "sp-pod" [c3efb0ba-1159-4f89-bd21-2855c5362d86] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c3efb0ba-1159-4f89-bd21-2855c5362d86] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003924761s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-126185 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-126185 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-126185 delete -f testdata/storage-provisioner/pod.yaml: (1.156909994s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-126185 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [93a153bf-f027-4c28-a5a8-8c51501cbb27] Pending
helpers_test.go:344: "sp-pod" [93a153bf-f027-4c28-a5a8-8c51501cbb27] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003754528s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-126185 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.11s)
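Condensed, the persistence check is: claim a volume, write through one pod, delete it, and read the file back from a fresh pod. Using the same testdata manifests:
	kubectl --context functional-126185 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-126185 get pvc myclaim -o jsonpath='{.status.phase}'      # expect Bound
	kubectl --context functional-126185 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-126185 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-126185 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-126185 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-126185 exec sp-pod -- ls /tmp/mount                       # foo survives the pod recreation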

                                                
                                    
TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh -n functional-126185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 cp functional-126185:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd284601059/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh -n functional-126185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh -n functional-126185 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.26s)
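CpCmd covers both copy directions plus copying to a path that does not yet exist in the node; by hand (the host-side destination path is illustrative):
	out/minikube-linux-arm64 -p functional-126185 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
	out/minikube-linux-arm64 -p functional-126185 cp functional-126185:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host
	out/minikube-linux-arm64 -p functional-126185 ssh -n functional-126185 "sudo cat /home/docker/cp-test.txt"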

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1750505/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "sudo cat /etc/test/nested/copy/1750505/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
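FileSync checks minikube's file-sync behaviour: files staged under the minikube home are copied into the node at the corresponding absolute path on start. A sketch, assuming the test staged its file at $MINIKUBE_HOME/files/etc/test/nested/copy/1750505/hosts before the cluster came up:
	out/minikube-linux-arm64 -p functional-126185 ssh "sudo cat /etc/test/nested/copy/1750505/hosts"   # prints the staged test content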

                                                
                                    
TestFunctional/parallel/CertSync (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1750505.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "sudo cat /etc/ssl/certs/1750505.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1750505.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "sudo cat /usr/share/ca-certificates/1750505.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/17505052.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "sudo cat /etc/ssl/certs/17505052.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/17505052.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "sudo cat /usr/share/ca-certificates/17505052.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.99s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-126185 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-126185 ssh "sudo systemctl is-active docker": exit status 1 (288.503715ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-126185 ssh "sudo systemctl is-active crio": exit status 1 (328.941529ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
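With --container-runtime=containerd only containerd should be running; docker and crio report inactive, and the "Process exited with status 3" above is just systemctl's exit code for an inactive unit passed back through minikube ssh. A manual spot check:
	out/minikube-linux-arm64 -p functional-126185 ssh "sudo systemctl is-active containerd"   # active
	out/minikube-linux-arm64 -p functional-126185 ssh "sudo systemctl is-active docker"       # inactive
	out/minikube-linux-arm64 -p functional-126185 ssh "sudo systemctl is-active crio"         # inactive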

                                                
                                    
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-126185 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-126185 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-126185 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1785136: os: process already finished
helpers_test.go:502: unable to terminate pid 1784957: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-126185 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-126185 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-126185 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f5db2dcf-de0a-4f25-aec6-a34babcba829] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f5db2dcf-de0a-4f25-aec6-a34babcba829] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004179071s
I1001 23:57:12.040013 1750505 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-126185 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.145.154 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
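The tunnel sequence above, condensed: keep a tunnel running, create a LoadBalancer service, read its external IP, and hit it directly from the host. Using the same manifest as the test (10.109.145.154 is whatever this run assigned):
	out/minikube-linux-arm64 -p functional-126185 tunnel        # leave running in a separate terminal
	kubectl --context functional-126185 apply -f testdata/testsvc.yaml
	kubectl --context functional-126185 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.109.145.154/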

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-126185 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-126185 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-126185 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-q8zds" [01731c23-7a4a-4cae-9645-0eac1cd5a112] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-q8zds" [01731c23-7a4a-4cae-9645-0eac1cd5a112] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.016387622s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "354.937233ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "52.053734ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "358.965591ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "52.071489ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-126185 /tmp/TestFunctionalparallelMountCmdany-port82481171/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727827048456915983" to /tmp/TestFunctionalparallelMountCmdany-port82481171/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727827048456915983" to /tmp/TestFunctionalparallelMountCmdany-port82481171/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727827048456915983" to /tmp/TestFunctionalparallelMountCmdany-port82481171/001/test-1727827048456915983
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.0366ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 23:57:28.758804 1750505 retry.go:31] will retry after 458.632787ms: exit status 1
E1001 23:57:29.035428 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  1 23:57 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  1 23:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  1 23:57 test-1727827048456915983
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh cat /mount-9p/test-1727827048456915983
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-126185 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [285cf072-5d80-47c9-b53e-6e0e33ae4e5c] Pending
helpers_test.go:344: "busybox-mount" [285cf072-5d80-47c9-b53e-6e0e33ae4e5c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [285cf072-5d80-47c9-b53e-6e0e33ae4e5c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [285cf072-5d80-47c9-b53e-6e0e33ae4e5c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003486771s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-126185 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-126185 /tmp/TestFunctionalparallelMountCmdany-port82481171/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.93s)
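The 9p mount can be driven the same way by hand; the host directory below is arbitrary, and the first failed findmnt above is only the mount not having come up yet (the test retries after ~460ms and then succeeds):
	out/minikube-linux-arm64 mount -p functional-126185 /tmp/somedir:/mount-9p     # leave running in a separate terminal
	out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T /mount-9p"
	out/minikube-linux-arm64 -p functional-126185 ssh "ls -la /mount-9p"
	out/minikube-linux-arm64 -p functional-126185 ssh "sudo umount -f /mount-9p"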

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 service list -o json
functional_test.go:1494: Took "480.123492ms" to run "out/minikube-linux-arm64 -p functional-126185 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31162
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31162
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
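HTTPS, Format and URL are three renderings of the same NodePort lookup for the hello-node service deployed earlier; by hand:
	out/minikube-linux-arm64 -p functional-126185 service hello-node --url                          # http://192.168.49.2:31162
	out/minikube-linux-arm64 -p functional-126185 service --namespace=default --https --url hello-node
	out/minikube-linux-arm64 -p functional-126185 service hello-node --url --format={{.IP}}         # output shaped by the Go template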

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-126185 /tmp/TestFunctionalparallelMountCmdspecific-port3580306172/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (559.08522ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 23:57:35.950412 1750505 retry.go:31] will retry after 699.100975ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-126185 /tmp/TestFunctionalparallelMountCmdspecific-port3580306172/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-126185 ssh "sudo umount -f /mount-9p": exit status 1 (285.337722ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-126185 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-126185 /tmp/TestFunctionalparallelMountCmdspecific-port3580306172/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-126185 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117419980/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-126185 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117419980/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-126185 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117419980/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T" /mount1: exit status 1 (788.611246ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 23:57:38.504998 1750505 retry.go:31] will retry after 363.864776ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-126185 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-126185 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117419980/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-126185 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117419980/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-126185 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117419980/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-126185 version -o=json --components: (1.213871801s)
--- PASS: TestFunctional/parallel/Version/components (1.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-126185 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-126185
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-126185
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-126185 image ls --format short --alsologtostderr:
I1001 23:57:48.242997 1790383 out.go:345] Setting OutFile to fd 1 ...
I1001 23:57:48.243140 1790383 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:57:48.243153 1790383 out.go:358] Setting ErrFile to fd 2...
I1001 23:57:48.243182 1790383 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:57:48.243458 1790383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
I1001 23:57:48.244955 1790383 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 23:57:48.245142 1790383 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 23:57:48.246321 1790383 cli_runner.go:164] Run: docker container inspect functional-126185 --format={{.State.Status}}
I1001 23:57:48.262908 1790383 ssh_runner.go:195] Run: systemctl --version
I1001 23:57:48.262965 1790383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-126185
I1001 23:57:48.280070 1790383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34679 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/functional-126185/id_rsa Username:docker}
I1001 23:57:48.377602 1790383 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-126185 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-126185  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| docker.io/library/nginx                     | latest             | sha256:6e8672 | 67.7MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/library/minikube-local-cache-test | functional-126185  | sha256:c07b95 | 989B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-126185 image ls --format table --alsologtostderr:
I1001 23:57:49.171831 1790611 out.go:345] Setting OutFile to fd 1 ...
I1001 23:57:49.172184 1790611 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:57:49.172194 1790611 out.go:358] Setting ErrFile to fd 2...
I1001 23:57:49.172200 1790611 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:57:49.172516 1790611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
I1001 23:57:49.173176 1790611 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 23:57:49.173292 1790611 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 23:57:49.173809 1790611 cli_runner.go:164] Run: docker container inspect functional-126185 --format={{.State.Status}}
I1001 23:57:49.191712 1790611 ssh_runner.go:195] Run: systemctl --version
I1001 23:57:49.191764 1790611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-126185
I1001 23:57:49.222136 1790611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34679 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/functional-126185/id_rsa Username:docker}
I1001 23:57:49.321143 1790611 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-126185 image ls --format json --alsologtostderr:
[{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2","repoDigests":["docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd0
15bb"],"repoTags":["docker.io/library/nginx:latest"],"size":"67693717"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c
23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-126185"],"size":"2173567"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a0838
71d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.
io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:c07b9501159f025d1f82ca5dead6675d0bd348112ac253964a4e4d668dc149b6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-126185"],"size":"989"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-126185 image ls --format json --alsologtostderr:
I1001 23:57:48.892169 1790532 out.go:345] Setting OutFile to fd 1 ...
I1001 23:57:48.892429 1790532 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:57:48.892490 1790532 out.go:358] Setting ErrFile to fd 2...
I1001 23:57:48.892512 1790532 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:57:48.893363 1790532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
I1001 23:57:48.894612 1790532 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 23:57:48.894863 1790532 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 23:57:48.895494 1790532 cli_runner.go:164] Run: docker container inspect functional-126185 --format={{.State.Status}}
I1001 23:57:48.915218 1790532 ssh_runner.go:195] Run: systemctl --version
I1001 23:57:48.915296 1790532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-126185
I1001 23:57:48.938634 1790532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34679 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/functional-126185/id_rsa Username:docker}
I1001 23:57:49.033091 1790532 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-126185 image ls --format yaml --alsologtostderr:
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-126185
size: "2173567"
- id: sha256:c07b9501159f025d1f82ca5dead6675d0bd348112ac253964a4e4d668dc149b6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-126185
size: "989"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2
repoDigests:
- docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb
repoTags:
- docker.io/library/nginx:latest
size: "67693717"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-126185 image ls --format yaml --alsologtostderr:
I1001 23:57:48.473084 1790421 out.go:345] Setting OutFile to fd 1 ...
I1001 23:57:48.473231 1790421 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:57:48.473237 1790421 out.go:358] Setting ErrFile to fd 2...
I1001 23:57:48.473241 1790421 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:57:48.473508 1790421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
I1001 23:57:48.474194 1790421 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 23:57:48.474372 1790421 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 23:57:48.474904 1790421 cli_runner.go:164] Run: docker container inspect functional-126185 --format={{.State.Status}}
I1001 23:57:48.495089 1790421 ssh_runner.go:195] Run: systemctl --version
I1001 23:57:48.495149 1790421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-126185
I1001 23:57:48.514996 1790421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34679 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/functional-126185/id_rsa Username:docker}
I1001 23:57:48.613629 1790421 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-126185 ssh pgrep buildkitd: exit status 1 (317.795238ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image build -t localhost/my-image:functional-126185 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-126185 image build -t localhost/my-image:functional-126185 testdata/build --alsologtostderr: (3.452292586s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-126185 image build -t localhost/my-image:functional-126185 testdata/build --alsologtostderr:
I1001 23:57:49.057315 1790585 out.go:345] Setting OutFile to fd 1 ...
I1001 23:57:49.059033 1790585 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:57:49.059063 1790585 out.go:358] Setting ErrFile to fd 2...
I1001 23:57:49.059071 1790585 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:57:49.059387 1790585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
I1001 23:57:49.060241 1790585 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 23:57:49.061791 1790585 config.go:182] Loaded profile config "functional-126185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 23:57:49.062313 1790585 cli_runner.go:164] Run: docker container inspect functional-126185 --format={{.State.Status}}
I1001 23:57:49.083563 1790585 ssh_runner.go:195] Run: systemctl --version
I1001 23:57:49.083620 1790585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-126185
I1001 23:57:49.107628 1790585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34679 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/functional-126185/id_rsa Username:docker}
I1001 23:57:49.216988 1790585 build_images.go:161] Building image from path: /tmp/build.1001819658.tar
I1001 23:57:49.217140 1790585 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1001 23:57:49.231170 1790585 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1001819658.tar
I1001 23:57:49.235049 1790585 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1001819658.tar: stat -c "%s %y" /var/lib/minikube/build/build.1001819658.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1001819658.tar': No such file or directory
I1001 23:57:49.235079 1790585 ssh_runner.go:362] scp /tmp/build.1001819658.tar --> /var/lib/minikube/build/build.1001819658.tar (3072 bytes)
I1001 23:57:49.262595 1790585 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1001819658
I1001 23:57:49.274688 1790585 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1001819658 -xf /var/lib/minikube/build/build.1001819658.tar
I1001 23:57:49.284566 1790585 containerd.go:394] Building image: /var/lib/minikube/build/build.1001819658
I1001 23:57:49.284640 1790585 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1001819658 --local dockerfile=/var/lib/minikube/build/build.1001819658 --output type=image,name=localhost/my-image:functional-126185
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.7s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a44d9ab3249323cf78457c5a0949f16e44a6281949ba2bbdea4fb4f7f7618f04
#8 exporting manifest sha256:a44d9ab3249323cf78457c5a0949f16e44a6281949ba2bbdea4fb4f7f7618f04 0.0s done
#8 exporting config sha256:30194ae585458c4372d4aafff35a1e7d222e851d9821767080a2a39112b96a63 0.0s done
#8 naming to localhost/my-image:functional-126185 done
#8 DONE 0.1s
I1001 23:57:52.418792 1790585 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1001819658 --local dockerfile=/var/lib/minikube/build/build.1001819658 --output type=image,name=localhost/my-image:functional-126185: (3.134122842s)
I1001 23:57:52.418867 1790585 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1001819658
I1001 23:57:52.428109 1790585 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1001819658.tar
I1001 23:57:52.437053 1790585 build_images.go:217] Built localhost/my-image:functional-126185 from /tmp/build.1001819658.tar
I1001 23:57:52.437083 1790585 build_images.go:133] succeeded building to: functional-126185
I1001 23:57:52.437088 1790585 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.00s)
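For reference, the buildkit steps logged above (a 97B Dockerfile, FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) suggest the testdata/build context is roughly equivalent to the sketch below. The Dockerfile and content.txt bodies are not printed in this log, so their exact contents are an assumption; the final build command is the one the test itself runs.

# Hypothetical reconstruction of the testdata/build context, based only on the buildkit steps above.
mkdir -p testdata/build
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo hello > testdata/build/content.txt   # contents assumed; only the ADD step is visible in the log
out/minikube-linux-arm64 -p functional-126185 image build -t localhost/my-image:functional-126185 testdata/build --alsologtostderr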

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-126185
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image load --daemon kicbase/echo-server:functional-126185 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-126185 image load --daemon kicbase/echo-server:functional-126185 --alsologtostderr: (1.117661265s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image load --daemon kicbase/echo-server:functional-126185 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-126185
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image load --daemon kicbase/echo-server:functional-126185 --alsologtostderr
2024/10/01 23:57:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image save kicbase/echo-server:functional-126185 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image rm kicbase/echo-server:functional-126185 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-126185
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-126185 image save --daemon kicbase/echo-server:functional-126185 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-126185
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
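Taken together, the ImageCommands subtests above exercise a save/remove/load round-trip. A condensed sketch of that flow with the same flags that appear in the log follows; the tar path here is a local placeholder rather than the Jenkins workspace path used by the run.

# Save the image out of the cluster runtime, remove it, load it back from the tar,
# then save it into the host docker daemon and confirm it exists there.
out/minikube-linux-arm64 -p functional-126185 image save kicbase/echo-server:functional-126185 ./echo-server-save.tar --alsologtostderr
out/minikube-linux-arm64 -p functional-126185 image rm kicbase/echo-server:functional-126185 --alsologtostderr
out/minikube-linux-arm64 -p functional-126185 image load ./echo-server-save.tar --alsologtostderr
out/minikube-linux-arm64 -p functional-126185 image save --daemon kicbase/echo-server:functional-126185 --alsologtostderr
docker image inspect kicbase/echo-server:functional-126185
out/minikube-linux-arm64 -p functional-126185 image ls   # the test re-lists images after each step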

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-126185
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-126185
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-126185
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (115.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-855709 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1001 23:58:09.997325 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:59:31.920597 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-855709 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m54.463685169s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (115.28s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (33.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-855709 -- rollout status deployment/busybox: (30.449816619s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-8nm2b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-jcqp5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-rh455 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-8nm2b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-jcqp5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-rh455 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-8nm2b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-jcqp5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-rh455 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (33.42s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-8nm2b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-8nm2b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-jcqp5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-jcqp5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-rh455 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-rh455 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.50s)
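The PingHostFromPods steps above follow the same pattern for each busybox pod: resolve host.minikube.internal from inside the pod, then ping the returned address once. A minimal sketch combining both steps for a single pod (pod name taken from this run; any of the three pods would do):

# Resolve the host address from inside the pod, then ping it once from the same pod.
HOST_IP=$(out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-8nm2b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
out/minikube-linux-arm64 kubectl -p ha-855709 -- exec busybox-7dff88458-8nm2b -- sh -c "ping -c 1 $HOST_IP"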

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (21.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-855709 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-855709 -v=7 --alsologtostderr: (20.987303953s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.95s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-855709 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp testdata/cp-test.txt ha-855709:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile469538674/001/cp-test_ha-855709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709:/home/docker/cp-test.txt ha-855709-m02:/home/docker/cp-test_ha-855709_ha-855709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m02 "sudo cat /home/docker/cp-test_ha-855709_ha-855709-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709:/home/docker/cp-test.txt ha-855709-m03:/home/docker/cp-test_ha-855709_ha-855709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m03 "sudo cat /home/docker/cp-test_ha-855709_ha-855709-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709:/home/docker/cp-test.txt ha-855709-m04:/home/docker/cp-test_ha-855709_ha-855709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m04 "sudo cat /home/docker/cp-test_ha-855709_ha-855709-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp testdata/cp-test.txt ha-855709-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile469538674/001/cp-test_ha-855709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m02:/home/docker/cp-test.txt ha-855709:/home/docker/cp-test_ha-855709-m02_ha-855709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709 "sudo cat /home/docker/cp-test_ha-855709-m02_ha-855709.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m02:/home/docker/cp-test.txt ha-855709-m03:/home/docker/cp-test_ha-855709-m02_ha-855709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m03 "sudo cat /home/docker/cp-test_ha-855709-m02_ha-855709-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m02:/home/docker/cp-test.txt ha-855709-m04:/home/docker/cp-test_ha-855709-m02_ha-855709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m04 "sudo cat /home/docker/cp-test_ha-855709-m02_ha-855709-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp testdata/cp-test.txt ha-855709-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile469538674/001/cp-test_ha-855709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m03:/home/docker/cp-test.txt ha-855709:/home/docker/cp-test_ha-855709-m03_ha-855709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709 "sudo cat /home/docker/cp-test_ha-855709-m03_ha-855709.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m03:/home/docker/cp-test.txt ha-855709-m02:/home/docker/cp-test_ha-855709-m03_ha-855709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m02 "sudo cat /home/docker/cp-test_ha-855709-m03_ha-855709-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m03:/home/docker/cp-test.txt ha-855709-m04:/home/docker/cp-test_ha-855709-m03_ha-855709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m04 "sudo cat /home/docker/cp-test_ha-855709-m03_ha-855709-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp testdata/cp-test.txt ha-855709-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile469538674/001/cp-test_ha-855709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m04:/home/docker/cp-test.txt ha-855709:/home/docker/cp-test_ha-855709-m04_ha-855709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709 "sudo cat /home/docker/cp-test_ha-855709-m04_ha-855709.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m04:/home/docker/cp-test.txt ha-855709-m02:/home/docker/cp-test_ha-855709-m04_ha-855709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m02 "sudo cat /home/docker/cp-test_ha-855709-m04_ha-855709-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 cp ha-855709-m04:/home/docker/cp-test.txt ha-855709-m03:/home/docker/cp-test_ha-855709-m04_ha-855709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m03 "sudo cat /home/docker/cp-test_ha-855709-m04_ha-855709-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.29s)
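The CopyFile matrix above repeats one pattern across every node pair. A condensed sketch of a single hop, using commands copied from the run: copy a file from the host into a node, verify it, copy it from that node to another node, and verify it again.

# Host -> ha-855709, then ha-855709 -> ha-855709-m02, verifying with "ssh ... sudo cat" after each copy.
out/minikube-linux-arm64 -p ha-855709 cp testdata/cp-test.txt ha-855709:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-arm64 -p ha-855709 cp ha-855709:/home/docker/cp-test.txt ha-855709-m02:/home/docker/cp-test_ha-855709_ha-855709-m02.txt
out/minikube-linux-arm64 -p ha-855709 ssh -n ha-855709-m02 "sudo cat /home/docker/cp-test_ha-855709_ha-855709-m02.txt"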

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-855709 node stop m02 -v=7 --alsologtostderr: (12.063456262s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr: exit status 7 (753.893275ms)

                                                
                                                
-- stdout --
	ha-855709
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-855709-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-855709-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-855709-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:01:19.059790 1806730 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:01:19.059969 1806730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:01:19.059981 1806730 out.go:358] Setting ErrFile to fd 2...
	I1002 00:01:19.059987 1806730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:01:19.060398 1806730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	I1002 00:01:19.060656 1806730 out.go:352] Setting JSON to false
	I1002 00:01:19.060700 1806730 mustload.go:65] Loading cluster: ha-855709
	I1002 00:01:19.060796 1806730 notify.go:220] Checking for updates...
	I1002 00:01:19.061220 1806730 config.go:182] Loaded profile config "ha-855709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1002 00:01:19.061246 1806730 status.go:174] checking status of ha-855709 ...
	I1002 00:01:19.061860 1806730 cli_runner.go:164] Run: docker container inspect ha-855709 --format={{.State.Status}}
	I1002 00:01:19.082229 1806730 status.go:371] ha-855709 host status = "Running" (err=<nil>)
	I1002 00:01:19.082261 1806730 host.go:66] Checking if "ha-855709" exists ...
	I1002 00:01:19.082661 1806730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-855709
	I1002 00:01:19.121024 1806730 host.go:66] Checking if "ha-855709" exists ...
	I1002 00:01:19.121353 1806730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:01:19.121408 1806730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-855709
	I1002 00:01:19.140708 1806730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34685 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/ha-855709/id_rsa Username:docker}
	I1002 00:01:19.237657 1806730 ssh_runner.go:195] Run: systemctl --version
	I1002 00:01:19.241899 1806730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:01:19.254827 1806730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:01:19.311835 1806730 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-02 00:01:19.296095248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:01:19.312581 1806730 kubeconfig.go:125] found "ha-855709" server: "https://192.168.49.254:8443"
	I1002 00:01:19.312619 1806730 api_server.go:166] Checking apiserver status ...
	I1002 00:01:19.312674 1806730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:01:19.328161 1806730 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1512/cgroup
	I1002 00:01:19.338639 1806730 api_server.go:182] apiserver freezer: "3:freezer:/docker/ea0676ff687755b7e1e3a34fe69d8ee3f30b7c6cac0e437ffb6c5dc1fa90b2a4/kubepods/burstable/pod6033ab319040e957e231d240d89eead0/42dbccc19d22ebf3ad91e7593445f5fb18c7032005d84c4c33e65c393d9d95b2"
	I1002 00:01:19.338719 1806730 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ea0676ff687755b7e1e3a34fe69d8ee3f30b7c6cac0e437ffb6c5dc1fa90b2a4/kubepods/burstable/pod6033ab319040e957e231d240d89eead0/42dbccc19d22ebf3ad91e7593445f5fb18c7032005d84c4c33e65c393d9d95b2/freezer.state
	I1002 00:01:19.348859 1806730 api_server.go:204] freezer state: "THAWED"
	I1002 00:01:19.348888 1806730 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 00:01:19.357453 1806730 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 00:01:19.357484 1806730 status.go:463] ha-855709 apiserver status = Running (err=<nil>)
	I1002 00:01:19.357495 1806730 status.go:176] ha-855709 status: &{Name:ha-855709 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:01:19.357517 1806730 status.go:174] checking status of ha-855709-m02 ...
	I1002 00:01:19.357871 1806730 cli_runner.go:164] Run: docker container inspect ha-855709-m02 --format={{.State.Status}}
	I1002 00:01:19.376794 1806730 status.go:371] ha-855709-m02 host status = "Stopped" (err=<nil>)
	I1002 00:01:19.376818 1806730 status.go:384] host is not running, skipping remaining checks
	I1002 00:01:19.376825 1806730 status.go:176] ha-855709-m02 status: &{Name:ha-855709-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:01:19.376847 1806730 status.go:174] checking status of ha-855709-m03 ...
	I1002 00:01:19.377157 1806730 cli_runner.go:164] Run: docker container inspect ha-855709-m03 --format={{.State.Status}}
	I1002 00:01:19.397099 1806730 status.go:371] ha-855709-m03 host status = "Running" (err=<nil>)
	I1002 00:01:19.397130 1806730 host.go:66] Checking if "ha-855709-m03" exists ...
	I1002 00:01:19.397436 1806730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-855709-m03
	I1002 00:01:19.418748 1806730 host.go:66] Checking if "ha-855709-m03" exists ...
	I1002 00:01:19.419711 1806730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:01:19.420329 1806730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-855709-m03
	I1002 00:01:19.438709 1806730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34695 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/ha-855709-m03/id_rsa Username:docker}
	I1002 00:01:19.534048 1806730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:01:19.545839 1806730 kubeconfig.go:125] found "ha-855709" server: "https://192.168.49.254:8443"
	I1002 00:01:19.545869 1806730 api_server.go:166] Checking apiserver status ...
	I1002 00:01:19.545909 1806730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:01:19.557400 1806730 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1337/cgroup
	I1002 00:01:19.567491 1806730 api_server.go:182] apiserver freezer: "3:freezer:/docker/162a8da2399b1d838d96f70e7dfe8521af977af9abd8c971c0a0c336c8e1a241/kubepods/burstable/podb6d3f8a7214da23e75d03b83df17af40/4a64d51f58478a3b6eabd95391d08cb50bb133c616f990075b00a93ef5955b9b"
	I1002 00:01:19.567559 1806730 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/162a8da2399b1d838d96f70e7dfe8521af977af9abd8c971c0a0c336c8e1a241/kubepods/burstable/podb6d3f8a7214da23e75d03b83df17af40/4a64d51f58478a3b6eabd95391d08cb50bb133c616f990075b00a93ef5955b9b/freezer.state
	I1002 00:01:19.576270 1806730 api_server.go:204] freezer state: "THAWED"
	I1002 00:01:19.576342 1806730 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 00:01:19.584354 1806730 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 00:01:19.584402 1806730 status.go:463] ha-855709-m03 apiserver status = Running (err=<nil>)
	I1002 00:01:19.584413 1806730 status.go:176] ha-855709-m03 status: &{Name:ha-855709-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:01:19.584432 1806730 status.go:174] checking status of ha-855709-m04 ...
	I1002 00:01:19.584785 1806730 cli_runner.go:164] Run: docker container inspect ha-855709-m04 --format={{.State.Status}}
	I1002 00:01:19.601523 1806730 status.go:371] ha-855709-m04 host status = "Running" (err=<nil>)
	I1002 00:01:19.601547 1806730 host.go:66] Checking if "ha-855709-m04" exists ...
	I1002 00:01:19.601844 1806730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-855709-m04
	I1002 00:01:19.619129 1806730 host.go:66] Checking if "ha-855709-m04" exists ...
	I1002 00:01:19.619438 1806730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:01:19.619494 1806730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-855709-m04
	I1002 00:01:19.644763 1806730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34700 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/ha-855709-m04/id_rsa Username:docker}
	I1002 00:01:19.745734 1806730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:01:19.757586 1806730 status.go:176] ha-855709-m04 status: &{Name:ha-855709-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.82s)
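
For reference, the stop-and-verify sequence exercised by this test can be reproduced by hand. A minimal sketch, assuming the ha-855709 profile and the out/minikube-linux-arm64 binary used in this run (a normal install would just call minikube); note that status deliberately exits non-zero (exit status 7 above) while any node is stopped:

  # Stop the m02 secondary control-plane node, then inspect cluster health.
  out/minikube-linux-arm64 -p ha-855709 node stop m02 -v=7 --alsologtostderr
  # status exits non-zero while a node is down; capture the code instead of aborting a script.
  out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr || echo "status exit code: $?"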

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.65s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-855709 node start m02 -v=7 --alsologtostderr: (17.573756518s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.65s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.04s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-855709 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-855709 -v=7 --alsologtostderr
E1002 00:01:48.057979 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:02.580763 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:02.587115 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:02.598528 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:02.619890 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:02.661247 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:02.742591 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:02.904068 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:03.225649 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:03.867576 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:05.149017 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:07.710921 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:12.832879 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:15.761956 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-855709 -v=7 --alsologtostderr: (37.570323182s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-855709 --wait=true -v=7 --alsologtostderr
E1002 00:02:23.074806 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:02:43.556583 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:03:24.517975 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-855709 --wait=true -v=7 --alsologtostderr: (1m32.33990801s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-855709
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.04s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.62s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-855709 node delete m03 -v=7 --alsologtostderr: (8.759338332s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.62s)
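
The Ready check above uses a kubectl go-template rather than parsing human-readable output. A minimal sketch of the same delete-and-verify flow, assuming the ha-855709 profile from this run:

  # Remove the m03 control-plane node from the HA cluster.
  out/minikube-linux-arm64 -p ha-855709 node delete m03 -v=7 --alsologtostderr
  # Every remaining node should print "True" for its Ready condition.
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'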

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (36s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-855709 stop -v=7 --alsologtostderr: (35.890430576s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr: exit status 7 (108.875054ms)

-- stdout --
	ha-855709
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-855709-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-855709-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 00:04:36.565121 1821248 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:04:36.565332 1821248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:04:36.565359 1821248 out.go:358] Setting ErrFile to fd 2...
	I1002 00:04:36.565379 1821248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:04:36.565660 1821248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	I1002 00:04:36.565886 1821248 out.go:352] Setting JSON to false
	I1002 00:04:36.565944 1821248 mustload.go:65] Loading cluster: ha-855709
	I1002 00:04:36.565991 1821248 notify.go:220] Checking for updates...
	I1002 00:04:36.566413 1821248 config.go:182] Loaded profile config "ha-855709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1002 00:04:36.566432 1821248 status.go:174] checking status of ha-855709 ...
	I1002 00:04:36.567001 1821248 cli_runner.go:164] Run: docker container inspect ha-855709 --format={{.State.Status}}
	I1002 00:04:36.584744 1821248 status.go:371] ha-855709 host status = "Stopped" (err=<nil>)
	I1002 00:04:36.584768 1821248 status.go:384] host is not running, skipping remaining checks
	I1002 00:04:36.584775 1821248 status.go:176] ha-855709 status: &{Name:ha-855709 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:04:36.584798 1821248 status.go:174] checking status of ha-855709-m02 ...
	I1002 00:04:36.585116 1821248 cli_runner.go:164] Run: docker container inspect ha-855709-m02 --format={{.State.Status}}
	I1002 00:04:36.606773 1821248 status.go:371] ha-855709-m02 host status = "Stopped" (err=<nil>)
	I1002 00:04:36.606793 1821248 status.go:384] host is not running, skipping remaining checks
	I1002 00:04:36.606801 1821248 status.go:176] ha-855709-m02 status: &{Name:ha-855709-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:04:36.606819 1821248 status.go:174] checking status of ha-855709-m04 ...
	I1002 00:04:36.607119 1821248 cli_runner.go:164] Run: docker container inspect ha-855709-m04 --format={{.State.Status}}
	I1002 00:04:36.625605 1821248 status.go:371] ha-855709-m04 host status = "Stopped" (err=<nil>)
	I1002 00:04:36.625626 1821248 status.go:384] host is not running, skipping remaining checks
	I1002 00:04:36.625632 1821248 status.go:176] ha-855709-m04 status: &{Name:ha-855709-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.00s)

TestMultiControlPlane/serial/RestartCluster (66.93s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-855709 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1002 00:04:46.439394 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-855709 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m6.001619452s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (66.93s)
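
The restart cycle above is a plain stop followed by a start --wait=true against the same profile. A minimal sketch, assuming the profile, driver, and container runtime from this run:

  # Stop every node in the profile, then bring the whole HA cluster back up and re-check it.
  out/minikube-linux-arm64 -p ha-855709 stop -v=7 --alsologtostderr
  out/minikube-linux-arm64 start -p ha-855709 --wait=true -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr
  kubectl get nodes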

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (43.32s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-855709 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-855709 --control-plane -v=7 --alsologtostderr: (42.343313674s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.32s)
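
Growing the control plane, as exercised above, is a single node add with --control-plane. A minimal sketch against the same profile:

  # Add another control-plane member to the HA profile and confirm it joins.
  out/minikube-linux-arm64 node add -p ha-855709 --control-plane -v=7 --alsologtostderr
  out/minikube-linux-arm64 -p ha-855709 status -v=7 --alsologtostderr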

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

TestJSONOutput/start/Command (48.11s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-940445 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1002 00:06:48.056822 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:07:02.577664 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-940445 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (48.103234739s)
--- PASS: TestJSONOutput/start/Command (48.11s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-940445 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-940445 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-940445 --output=json --user=testUser
E1002 00:07:30.280857 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-940445 --output=json --user=testUser: (5.850555335s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-333511 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-333511 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.937628ms)

-- stdout --
	{"specversion":"1.0","id":"17fd96ac-02a3-4559-a804-143831cd2826","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-333511] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"444f72db-6107-4bdb-af95-687c015b5030","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19740"}}
	{"specversion":"1.0","id":"a4acc4ce-e016-4d60-add1-f61dcacb06d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c9117fe9-fce5-4d68-99c7-f325bba5e470","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig"}}
	{"specversion":"1.0","id":"10c73c6d-4863-4e33-886a-8bee13235fb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube"}}
	{"specversion":"1.0","id":"99a68128-deed-4d73-9996-d99fbd0baf06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a0543572-1642-4647-a360-defd0673666b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c77b2bbd-f64d-45b6-9e9f-55dfab720487","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-333511" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-333511
--- PASS: TestErrorJSONOutput (0.21s)
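
With --output=json, minikube emits one CloudEvents-style JSON object per line (type io.k8s.sigs.minikube.step, .info, or .error, with the human-readable text under data.message), as the stdout above shows. A minimal sketch of consuming that stream, assuming jq is available; the profile and flags are the ones from the passing TestJSONOutput/start/Command run:

  # Print only the step messages from a JSON-mode start; failures arrive as io.k8s.sigs.minikube.error events.
  out/minikube-linux-arm64 start -p json-output-940445 --output=json --user=testUser \
    --memory=2200 --wait=true --driver=docker --container-runtime=containerd \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'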

TestKicCustomNetwork/create_custom_network (36.66s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-148803 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-148803 --network=: (34.677685671s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-148803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-148803
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-148803: (1.960179415s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.66s)

TestKicCustomNetwork/use_default_bridge_network (32.87s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-044254 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-044254 --network=bridge: (30.914961462s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-044254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-044254
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-044254: (1.937619412s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.87s)

TestKicExistingNetwork (31.67s)
=== RUN   TestKicExistingNetwork
I1002 00:08:46.088024 1750505 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 00:08:46.106658 1750505 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 00:08:46.106738 1750505 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 00:08:46.106765 1750505 cli_runner.go:164] Run: docker network inspect existing-network
W1002 00:08:46.123133 1750505 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 00:08:46.123167 1750505 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1002 00:08:46.123183 1750505 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1002 00:08:46.123284 1750505 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 00:08:46.139335 1750505 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-51fdb763b1b3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3d:4a:1e:0c} reservation:<nil>}
I1002 00:08:46.139700 1750505 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c77630}
I1002 00:08:46.139754 1750505 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1002 00:08:46.139807 1750505 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 00:08:46.211934 1750505 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-958848 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-958848 --network=existing-network: (29.503656378s)
helpers_test.go:175: Cleaning up "existing-network-958848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-958848
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-958848: (2.007699425s)
I1002 00:09:17.739543 1750505 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.67s)
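
TestKicExistingNetwork pre-creates a bridge network and then points --network= at it. A minimal sketch of that flow, using the subnet and names this run happened to pick (any free private subnet and network name would do):

  # Create the docker network up front, then attach the minikube node container to it.
  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
  out/minikube-linux-arm64 start -p existing-network-958848 --network=existing-network
  # The node container should now be listed on the pre-existing bridge.
  docker network inspect existing-network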

TestKicCustomSubnet (33.15s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-530209 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-530209 --subnet=192.168.60.0/24: (31.078263793s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-530209 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-530209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-530209
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-530209: (2.050511264s)
--- PASS: TestKicCustomSubnet (33.15s)
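
The custom-subnet case can be verified the same way the test does, by reading the subnet back off the network minikube generated. A minimal sketch with the values from this run:

  # Carve the node network out of a specific subnet, then confirm what was created.
  out/minikube-linux-arm64 start -p custom-subnet-530209 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-530209 --format "{{(index .IPAM.Config 0).Subnet}}"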

TestKicStaticIP (33.74s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-310174 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-310174 --static-ip=192.168.200.200: (31.492929491s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-310174 ip
helpers_test.go:175: Cleaning up "static-ip-310174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-310174
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-310174: (2.095193799s)
--- PASS: TestKicStaticIP (33.74s)
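
Likewise, --static-ip pins the node container's address up front. A minimal sketch with the address from this run:

  # Start with a fixed container IP and read it back.
  out/minikube-linux-arm64 start -p static-ip-310174 --static-ip=192.168.200.200
  out/minikube-linux-arm64 -p static-ip-310174 ip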

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (63.91s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-961045 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-961045 --driver=docker  --container-runtime=containerd: (28.10456761s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-963711 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-963711 --driver=docker  --container-runtime=containerd: (30.355116397s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-961045
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-963711
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-963711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-963711
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-963711: (1.948965946s)
helpers_test.go:175: Cleaning up "first-961045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-961045
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-961045: (2.217878894s)
--- PASS: TestMinikubeProfile (63.91s)
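
Switching between the two clusters created above is handled by the profile subcommand. A minimal sketch with the profile names from this run:

  # Create two independent clusters, then flip the active profile and list both as JSON.
  out/minikube-linux-arm64 start -p first-961045 --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 start -p second-963711 --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 profile first-961045
  out/minikube-linux-arm64 profile list -ojson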

TestMountStart/serial/StartWithMountFirst (6.82s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-320559 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-320559 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.818166245s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.82s)
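
The mount-only profile above runs with --no-kubernetes and exposes the host directory inside the node at /minikube-host. A minimal sketch with the flags from this run:

  # Start a no-kubernetes profile whose only job is the host mount, then list the mounted directory.
  out/minikube-linux-arm64 start -p mount-start-1-320559 --memory=2048 --mount \
    --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
    --no-kubernetes --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p mount-start-1-320559 ssh -- ls /minikube-host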

TestMountStart/serial/VerifyMountFirst (0.24s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-320559 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (5.93s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-322339 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-322339 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.926373039s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.93s)

TestMountStart/serial/VerifyMountSecond (0.25s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-322339 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.6s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-320559 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-320559 --alsologtostderr -v=5: (1.595653633s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-322339 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-322339
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-322339: (1.196473186s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.92s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-322339
E1002 00:11:48.056920 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-322339: (6.924458881s)
--- PASS: TestMountStart/serial/RestartStopped (7.92s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-322339 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (66.25s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-070690 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1002 00:12:02.577988 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-070690 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m5.752370116s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.25s)

TestMultiNode/serial/DeployApp2Nodes (56.36s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- rollout status deployment/busybox
E1002 00:13:11.125543 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-070690 -- rollout status deployment/busybox: (54.543157562s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- exec busybox-7dff88458-c8ttt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- exec busybox-7dff88458-fn4jl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- exec busybox-7dff88458-c8ttt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- exec busybox-7dff88458-fn4jl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- exec busybox-7dff88458-c8ttt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- exec busybox-7dff88458-fn4jl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (56.36s)
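
Note: the steps above amount to deploying the busybox DNS-test manifest and then checking in-cluster DNS resolution from a pod scheduled on each node. A minimal by-hand sketch of the same checks (pod names are from this run and will differ elsewhere):

  $ kubectl --context multinode-070690 apply -f testdata/multinodes/multinode-pod-dns-test.yaml
  $ kubectl --context multinode-070690 rollout status deployment/busybox
  $ kubectl --context multinode-070690 get pods -o jsonpath='{.items[*].metadata.name}'
  $ kubectl --context multinode-070690 exec busybox-7dff88458-c8ttt -- nslookup kubernetes.default.svc.cluster.local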

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- exec busybox-7dff88458-c8ttt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- exec busybox-7dff88458-c8ttt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- exec busybox-7dff88458-fn4jl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-070690 -- exec busybox-7dff88458-fn4jl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
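
Note: the shell pipeline above pulls the resolved address of host.minikube.internal out of the busybox nslookup output (line 5, third field, 192.168.67.1 in this run) and then pings it from inside each pod, confirming that pods on both nodes can reach the host. Roughly, per pod:

  $ kubectl --context multinode-070690 exec busybox-7dff88458-c8ttt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  $ kubectl --context multinode-070690 exec busybox-7dff88458-c8ttt -- sh -c "ping -c 1 192.168.67.1"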

                                                
                                    
TestMultiNode/serial/AddNode (15.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-070690 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-070690 -v 3 --alsologtostderr: (14.999051009s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.64s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-070690 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp testdata/cp-test.txt multinode-070690:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp multinode-070690:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4040035073/001/cp-test_multinode-070690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp multinode-070690:/home/docker/cp-test.txt multinode-070690-m02:/home/docker/cp-test_multinode-070690_multinode-070690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m02 "sudo cat /home/docker/cp-test_multinode-070690_multinode-070690-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp multinode-070690:/home/docker/cp-test.txt multinode-070690-m03:/home/docker/cp-test_multinode-070690_multinode-070690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m03 "sudo cat /home/docker/cp-test_multinode-070690_multinode-070690-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp testdata/cp-test.txt multinode-070690-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp multinode-070690-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4040035073/001/cp-test_multinode-070690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp multinode-070690-m02:/home/docker/cp-test.txt multinode-070690:/home/docker/cp-test_multinode-070690-m02_multinode-070690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690 "sudo cat /home/docker/cp-test_multinode-070690-m02_multinode-070690.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp multinode-070690-m02:/home/docker/cp-test.txt multinode-070690-m03:/home/docker/cp-test_multinode-070690-m02_multinode-070690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m03 "sudo cat /home/docker/cp-test_multinode-070690-m02_multinode-070690-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp testdata/cp-test.txt multinode-070690-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp multinode-070690-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4040035073/001/cp-test_multinode-070690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp multinode-070690-m03:/home/docker/cp-test.txt multinode-070690:/home/docker/cp-test_multinode-070690-m03_multinode-070690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690 "sudo cat /home/docker/cp-test_multinode-070690-m03_multinode-070690.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 cp multinode-070690-m03:/home/docker/cp-test.txt multinode-070690-m02:/home/docker/cp-test_multinode-070690-m03_multinode-070690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m02 "sudo cat /home/docker/cp-test_multinode-070690-m03_multinode-070690-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.69s)
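
Note: CopyFile walks every copy direction that "minikube cp" supports: host to node, node back to a host path, and node to node, verifying each copy over "minikube ssh". A condensed sketch of one round trip between the control plane and m02:

  $ out/minikube-linux-arm64 -p multinode-070690 cp testdata/cp-test.txt multinode-070690:/home/docker/cp-test.txt
  $ out/minikube-linux-arm64 -p multinode-070690 cp multinode-070690:/home/docker/cp-test.txt multinode-070690-m02:/home/docker/cp-test.txt
  $ out/minikube-linux-arm64 -p multinode-070690 ssh -n multinode-070690-m02 "sudo cat /home/docker/cp-test.txt"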

                                                
                                    
TestMultiNode/serial/StopNode (2.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-070690 node stop m03: (1.201418182s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-070690 status: exit status 7 (498.419505ms)

                                                
                                                
-- stdout --
	multinode-070690
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-070690-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-070690-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-070690 status --alsologtostderr: exit status 7 (494.852863ms)

                                                
                                                
-- stdout --
	multinode-070690
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-070690-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-070690-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:14:26.226671 1874673 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:14:26.226837 1874673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:14:26.226876 1874673 out.go:358] Setting ErrFile to fd 2...
	I1002 00:14:26.226898 1874673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:14:26.227156 1874673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	I1002 00:14:26.227384 1874673 out.go:352] Setting JSON to false
	I1002 00:14:26.227455 1874673 mustload.go:65] Loading cluster: multinode-070690
	I1002 00:14:26.227535 1874673 notify.go:220] Checking for updates...
	I1002 00:14:26.227931 1874673 config.go:182] Loaded profile config "multinode-070690": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1002 00:14:26.227967 1874673 status.go:174] checking status of multinode-070690 ...
	I1002 00:14:26.228619 1874673 cli_runner.go:164] Run: docker container inspect multinode-070690 --format={{.State.Status}}
	I1002 00:14:26.248255 1874673 status.go:371] multinode-070690 host status = "Running" (err=<nil>)
	I1002 00:14:26.248275 1874673 host.go:66] Checking if "multinode-070690" exists ...
	I1002 00:14:26.248682 1874673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-070690
	I1002 00:14:26.280572 1874673 host.go:66] Checking if "multinode-070690" exists ...
	I1002 00:14:26.280887 1874673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:14:26.280934 1874673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070690
	I1002 00:14:26.297818 1874673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34805 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/multinode-070690/id_rsa Username:docker}
	I1002 00:14:26.390226 1874673 ssh_runner.go:195] Run: systemctl --version
	I1002 00:14:26.394516 1874673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:14:26.406530 1874673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:14:26.457631 1874673 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-02 00:14:26.447483034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:14:26.458232 1874673 kubeconfig.go:125] found "multinode-070690" server: "https://192.168.67.2:8443"
	I1002 00:14:26.458266 1874673 api_server.go:166] Checking apiserver status ...
	I1002 00:14:26.458313 1874673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:14:26.469117 1874673 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1449/cgroup
	I1002 00:14:26.478357 1874673 api_server.go:182] apiserver freezer: "3:freezer:/docker/1a404bd009e516e3047e44dd84bbba811131867a6e044ceea366e15821be0976/kubepods/burstable/podda1e4441c9f9e322321a3e1b23d5b050/2645d39d0719cbe88d8472cd5cc7f10cdc2d80d89afd0f4afd14d8a2edbea029"
	I1002 00:14:26.478428 1874673 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1a404bd009e516e3047e44dd84bbba811131867a6e044ceea366e15821be0976/kubepods/burstable/podda1e4441c9f9e322321a3e1b23d5b050/2645d39d0719cbe88d8472cd5cc7f10cdc2d80d89afd0f4afd14d8a2edbea029/freezer.state
	I1002 00:14:26.486916 1874673 api_server.go:204] freezer state: "THAWED"
	I1002 00:14:26.486945 1874673 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 00:14:26.494861 1874673 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 00:14:26.494890 1874673 status.go:463] multinode-070690 apiserver status = Running (err=<nil>)
	I1002 00:14:26.494901 1874673 status.go:176] multinode-070690 status: &{Name:multinode-070690 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:14:26.494917 1874673 status.go:174] checking status of multinode-070690-m02 ...
	I1002 00:14:26.495220 1874673 cli_runner.go:164] Run: docker container inspect multinode-070690-m02 --format={{.State.Status}}
	I1002 00:14:26.510575 1874673 status.go:371] multinode-070690-m02 host status = "Running" (err=<nil>)
	I1002 00:14:26.510603 1874673 host.go:66] Checking if "multinode-070690-m02" exists ...
	I1002 00:14:26.510924 1874673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-070690-m02
	I1002 00:14:26.526117 1874673 host.go:66] Checking if "multinode-070690-m02" exists ...
	I1002 00:14:26.526434 1874673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 00:14:26.526486 1874673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070690-m02
	I1002 00:14:26.543035 1874673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34810 SSHKeyPath:/home/jenkins/minikube-integration/19740-1745120/.minikube/machines/multinode-070690-m02/id_rsa Username:docker}
	I1002 00:14:26.633500 1874673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:14:26.647339 1874673 status.go:176] multinode-070690-m02 status: &{Name:multinode-070690-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:14:26.647372 1874673 status.go:174] checking status of multinode-070690-m03 ...
	I1002 00:14:26.647680 1874673 cli_runner.go:164] Run: docker container inspect multinode-070690-m03 --format={{.State.Status}}
	I1002 00:14:26.665971 1874673 status.go:371] multinode-070690-m03 host status = "Stopped" (err=<nil>)
	I1002 00:14:26.665991 1874673 status.go:384] host is not running, skipping remaining checks
	I1002 00:14:26.665998 1874673 status.go:176] multinode-070690-m03 status: &{Name:multinode-070690-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)
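
Note: with only the m03 worker stopped, status still lists the other nodes as Running but exits with status 7, which is what the test asserts. Sketch:

  $ out/minikube-linux-arm64 -p multinode-070690 node stop m03
  $ out/minikube-linux-arm64 -p multinode-070690 status          # exit status 7 while any node is stopped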

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-070690 node start m03 -v=7 --alsologtostderr: (8.682710161s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.48s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (122.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-070690
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-070690
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-070690: (24.918681005s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-070690 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-070690 --wait=true -v=8 --alsologtostderr: (1m37.714346074s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-070690
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.75s)
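
Note: the property checked here is that a full stop followed by start --wait=true brings back all three nodes without re-adding them; the node list before and after the restart should match. Sketch:

  $ out/minikube-linux-arm64 node list -p multinode-070690
  $ out/minikube-linux-arm64 stop -p multinode-070690
  $ out/minikube-linux-arm64 start -p multinode-070690 --wait=true
  $ out/minikube-linux-arm64 node list -p multinode-070690       # same node list as before the stop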

                                                
                                    
TestMultiNode/serial/DeleteNode (5.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-070690 node delete m03: (4.744968188s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.41s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 stop
E1002 00:16:48.056871 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:17:02.580303 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-070690 stop: (23.835282085s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-070690 status: exit status 7 (91.1906ms)

                                                
                                                
-- stdout --
	multinode-070690
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-070690-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-070690 status --alsologtostderr: exit status 7 (94.699725ms)

                                                
                                                
-- stdout --
	multinode-070690
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-070690-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:17:08.285501 1883094 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:17:08.285699 1883094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:17:08.285726 1883094 out.go:358] Setting ErrFile to fd 2...
	I1002 00:17:08.285744 1883094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:17:08.286025 1883094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	I1002 00:17:08.286248 1883094 out.go:352] Setting JSON to false
	I1002 00:17:08.286308 1883094 mustload.go:65] Loading cluster: multinode-070690
	I1002 00:17:08.286396 1883094 notify.go:220] Checking for updates...
	I1002 00:17:08.286806 1883094 config.go:182] Loaded profile config "multinode-070690": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1002 00:17:08.286847 1883094 status.go:174] checking status of multinode-070690 ...
	I1002 00:17:08.287486 1883094 cli_runner.go:164] Run: docker container inspect multinode-070690 --format={{.State.Status}}
	I1002 00:17:08.305432 1883094 status.go:371] multinode-070690 host status = "Stopped" (err=<nil>)
	I1002 00:17:08.305452 1883094 status.go:384] host is not running, skipping remaining checks
	I1002 00:17:08.305459 1883094 status.go:176] multinode-070690 status: &{Name:multinode-070690 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 00:17:08.305488 1883094 status.go:174] checking status of multinode-070690-m02 ...
	I1002 00:17:08.305802 1883094 cli_runner.go:164] Run: docker container inspect multinode-070690-m02 --format={{.State.Status}}
	I1002 00:17:08.333159 1883094 status.go:371] multinode-070690-m02 host status = "Stopped" (err=<nil>)
	I1002 00:17:08.333178 1883094 status.go:384] host is not running, skipping remaining checks
	I1002 00:17:08.333184 1883094 status.go:176] multinode-070690-m02 status: &{Name:multinode-070690-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (56.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-070690 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-070690 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (56.046810146s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-070690 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.69s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (32.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-070690
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-070690-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-070690-m02 --driver=docker  --container-runtime=containerd: exit status 14 (82.542873ms)

                                                
                                                
-- stdout --
	* [multinode-070690-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-070690-m02' is duplicated with machine name 'multinode-070690-m02' in profile 'multinode-070690'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-070690-m03 --driver=docker  --container-runtime=containerd
E1002 00:18:25.643058 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-070690-m03 --driver=docker  --container-runtime=containerd: (30.568029268s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-070690
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-070690: exit status 80 (290.108563ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-070690 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-070690-m03 already exists in multinode-070690-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-070690-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-070690-m03: (1.919931669s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.91s)
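
Note: both failures above are the expected behaviour: a new profile cannot reuse the machine name of an existing profile's node (exit status 14, MK_USAGE), and node add refuses a node whose name collides with another profile (exit status 80, GUEST_NODE_ADD). For example:

  $ out/minikube-linux-arm64 start -p multinode-070690-m02 --driver=docker --container-runtime=containerd   # rejected: duplicates node m02 of multinode-070690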

                                                
                                    
TestPreload (113.56s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-473380 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-473380 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m16.703219798s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-473380 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-473380 image pull gcr.io/k8s-minikube/busybox: (2.187485952s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-473380
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-473380: (12.105532449s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-473380 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-473380 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (19.818868247s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-473380 image list
helpers_test.go:175: Cleaning up "test-preload-473380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-473380
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-473380: (2.408714252s)
--- PASS: TestPreload (113.56s)
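
Note: the preload scenario is: create the cluster with --preload=false on an older Kubernetes, pull an extra image, stop, restart on the current default Kubernetes (which uses a preloaded image tarball), and confirm the pulled image is still present via image list. Condensed:

  $ out/minikube-linux-arm64 start -p test-preload-473380 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=containerd
  $ out/minikube-linux-arm64 -p test-preload-473380 image pull gcr.io/k8s-minikube/busybox
  $ out/minikube-linux-arm64 stop -p test-preload-473380
  $ out/minikube-linux-arm64 start -p test-preload-473380 --driver=docker --container-runtime=containerd
  $ out/minikube-linux-arm64 -p test-preload-473380 image list   # gcr.io/k8s-minikube/busybox should still be listed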

                                                
                                    
TestScheduledStopUnix (104.34s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-912395 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-912395 --memory=2048 --driver=docker  --container-runtime=containerd: (28.128756768s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-912395 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-912395 -n scheduled-stop-912395
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-912395 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 00:21:03.959335 1750505 retry.go:31] will retry after 79.175µs: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.959756 1750505 retry.go:31] will retry after 195.47µs: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.960933 1750505 retry.go:31] will retry after 210.613µs: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.962053 1750505 retry.go:31] will retry after 207.943µs: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.963170 1750505 retry.go:31] will retry after 655.528µs: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.964252 1750505 retry.go:31] will retry after 497.478µs: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.965361 1750505 retry.go:31] will retry after 671.983µs: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.966477 1750505 retry.go:31] will retry after 1.659625ms: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.968661 1750505 retry.go:31] will retry after 3.175131ms: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.972867 1750505 retry.go:31] will retry after 2.228037ms: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.976166 1750505 retry.go:31] will retry after 5.486527ms: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.982390 1750505 retry.go:31] will retry after 11.235472ms: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:03.994600 1750505 retry.go:31] will retry after 6.622008ms: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:04.001860 1750505 retry.go:31] will retry after 21.125303ms: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:04.024104 1750505 retry.go:31] will retry after 16.186252ms: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
I1002 00:21:04.041319 1750505 retry.go:31] will retry after 25.279029ms: open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/scheduled-stop-912395/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-912395 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-912395 -n scheduled-stop-912395
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-912395
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-912395 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1002 00:21:48.056906 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:22:02.578592 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-912395
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-912395: exit status 7 (73.914782ms)

                                                
                                                
-- stdout --
	scheduled-stop-912395
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-912395 -n scheduled-stop-912395
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-912395 -n scheduled-stop-912395: exit status 7 (68.70272ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-912395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-912395
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-912395: (4.77693047s)
--- PASS: TestScheduledStopUnix (104.34s)
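
Note: the flow exercised here is: schedule a stop far in the future, cancel it, then schedule a short one and wait for the profile to actually reach Stopped (at which point status exits with status 7). Sketch:

  $ out/minikube-linux-arm64 stop -p scheduled-stop-912395 --schedule 5m
  $ out/minikube-linux-arm64 stop -p scheduled-stop-912395 --cancel-scheduled
  $ out/minikube-linux-arm64 stop -p scheduled-stop-912395 --schedule 15s
  $ out/minikube-linux-arm64 status -p scheduled-stop-912395     # exit status 7 once the scheduled stop has fired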

                                                
                                    
TestInsufficientStorage (10.26s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-052631 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-052631 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.847390788s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c1a3d233-914b-4b0b-ba15-a1d1a7f2e915","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-052631] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3136167d-2181-4c84-9107-18f06cccbf2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19740"}}
	{"specversion":"1.0","id":"bb4a076e-efd4-4079-979a-f24c9856bbfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dac77d37-879e-4b63-8172-a297ec69b600","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig"}}
	{"specversion":"1.0","id":"71cec4d4-6c39-4426-af67-372930d04d14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube"}}
	{"specversion":"1.0","id":"a673ed3f-4595-4c2e-b6ed-caea684a2d9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"04c46488-50eb-4e3d-8473-10e04dc43eee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e775c177-518d-4e82-b5ae-d745a29bd04c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4c268147-d000-41ab-a62b-c1cc88b69d8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8e598eda-280f-4267-8ffe-4becd13f1880","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"18631146-3bee-4f37-987f-a2819639ab19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b5f4b149-7b2e-46ba-a089-42ca5da1b980","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-052631\" primary control-plane node in \"insufficient-storage-052631\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb18381b-dfab-4f3f-b62c-a203e9e96d14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727731891-master ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"573c4a4e-cbf3-4f2f-9668-61d4cde3f8c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0f250c4-16bf-4952-9903-c69ce75954a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-052631 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-052631 --output=json --layout=cluster: exit status 7 (298.961594ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-052631","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-052631","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:22:27.835887 1901721 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-052631" does not appear in /home/jenkins/minikube-integration/19740-1745120/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-052631 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-052631 --output=json --layout=cluster: exit status 7 (272.345399ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-052631","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-052631","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:22:28.109433 1901781 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-052631" does not appear in /home/jenkins/minikube-integration/19740-1745120/kubeconfig
	E1002 00:22:28.119368 1901781 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/insufficient-storage-052631/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-052631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-052631
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-052631: (1.84191368s)
--- PASS: TestInsufficientStorage (10.26s)
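
Note: the test makes minikube believe /var is nearly full (MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 appear in the start output), so start aborts with exit code 26 (RSRC_DOCKER_STORAGE) and status --layout=cluster reports StatusCode 507 / InsufficientStorage. A sketch, on the assumption that those two values are plain environment variables read by minikube; per the error text, --force skips the check:

  $ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-arm64 start -p insufficient-storage-052631 --memory=2048 --output=json --driver=docker --container-runtime=containerd
  $ out/minikube-linux-arm64 status -p insufficient-storage-052631 --output=json --layout=cluster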

                                                
                                    
TestRunningBinaryUpgrade (74.78s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2167264918 start -p running-upgrade-933136 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2167264918 start -p running-upgrade-933136 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (33.788446988s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-933136 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-933136 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.368974077s)
helpers_test.go:175: Cleaning up "running-upgrade-933136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-933136
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-933136: (2.906479195s)
--- PASS: TestRunningBinaryUpgrade (74.78s)
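
Note: the upgrade path here is an in-place one: bring the cluster up with an older release binary (a v1.26.0 build downloaded to a temporary path), then run start on the same profile with the freshly built binary while the cluster is still running. With <old-minikube> standing in for that downloaded binary:

  $ <old-minikube> start -p running-upgrade-933136 --memory=2200 --vm-driver=docker --container-runtime=containerd
  $ out/minikube-linux-arm64 start -p running-upgrade-933136 --memory=2200 --driver=docker --container-runtime=containerd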

                                                
                                    
TestKubernetesUpgrade (352.38s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-598018 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-598018 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.740672086s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-598018
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-598018: (1.279949729s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-598018 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-598018 status --format={{.Host}}: exit status 7 (91.853203ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-598018 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-598018 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m41.644312808s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-598018 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-598018 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-598018 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (93.106806ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-598018] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-598018
	    minikube start -p kubernetes-upgrade-598018 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5980182 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-598018 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-598018 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-598018 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.441159785s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-598018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-598018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-598018: (2.940341678s)
--- PASS: TestKubernetesUpgrade (352.38s)
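
Note: the sequence is start on v1.20.0, stop, upgrade the same profile to v1.31.1, verify that downgrading back to v1.20.0 is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED), and confirm a repeated v1.31.1 start still succeeds. Condensed:

  $ out/minikube-linux-arm64 start -p kubernetes-upgrade-598018 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
  $ out/minikube-linux-arm64 stop -p kubernetes-upgrade-598018
  $ out/minikube-linux-arm64 start -p kubernetes-upgrade-598018 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=containerd
  $ out/minikube-linux-arm64 start -p kubernetes-upgrade-598018 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd   # refused: exit status 106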

                                                
                                    
TestMissingContainerUpgrade (193.39s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.4021357197 start -p missing-upgrade-813383 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.4021357197 start -p missing-upgrade-813383 --memory=2200 --driver=docker  --container-runtime=containerd: (1m38.74726614s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-813383
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-813383
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-813383 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-813383 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m30.728155389s)
helpers_test.go:175: Cleaning up "missing-upgrade-813383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-813383
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-813383: (2.34315931s)
--- PASS: TestMissingContainerUpgrade (193.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-850174 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-850174 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (97.694419ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-850174] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
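
For reference, the MK_USAGE exit above comes from combining mutually exclusive flags: --no-kubernetes and --kubernetes-version cannot be used together. A minimal sketch of the two documented ways around it, taken from the error message and from the invocation the later StartWithStopK8s step actually runs:

    # drop any globally configured Kubernetes version, as the error message suggests
    out/minikube-linux-arm64 config unset kubernetes-version
    # or start the profile without Kubernetes and without pinning a version
    out/minikube-linux-arm64 start -p NoKubernetes-850174 --no-kubernetes --driver=docker --container-runtime=containerd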

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (36.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-850174 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-850174 --driver=docker  --container-runtime=containerd: (35.933143507s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-850174 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (19.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-850174 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-850174 --no-kubernetes --driver=docker  --container-runtime=containerd: (17.291251889s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-850174 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-850174 status -o json: exit status 2 (288.597458ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-850174","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-850174
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-850174: (1.829563s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-850174 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-850174 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.368575451s)
--- PASS: TestNoKubernetes/serial/Start (6.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-850174 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-850174 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.128102ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-850174
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-850174: (1.207092772s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-850174 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-850174 --driver=docker  --container-runtime=containerd: (7.255802403s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-850174 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-850174 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.291274ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (103.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.887528405 start -p stopped-upgrade-925267 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.887528405 start -p stopped-upgrade-925267 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.601720926s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.887528405 -p stopped-upgrade-925267 stop
E1002 00:26:48.059190 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.887528405 -p stopped-upgrade-925267 stop: (19.958118792s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-925267 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1002 00:27:02.577851 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-925267 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.446701189s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (103.01s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-925267
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-925267: (1.300218481s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

                                                
                                    
x
+
TestPause/serial/Start (97.66s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-489543 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-489543 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m37.661506661s)
--- PASS: TestPause/serial/Start (97.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-811806 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-811806 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (171.626559ms)

                                                
                                                
-- stdout --
	* [false-811806] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:30:16.998566 1941066 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:30:16.998750 1941066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:30:16.998761 1941066 out.go:358] Setting ErrFile to fd 2...
	I1002 00:30:16.998778 1941066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:30:16.999059 1941066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-1745120/.minikube/bin
	I1002 00:30:16.999483 1941066 out.go:352] Setting JSON to false
	I1002 00:30:17.000525 1941066 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":29564,"bootTime":1727799453,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1002 00:30:17.000604 1941066 start.go:139] virtualization:  
	I1002 00:30:17.003534 1941066 out.go:177] * [false-811806] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1002 00:30:17.005887 1941066 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:30:17.006022 1941066 notify.go:220] Checking for updates...
	I1002 00:30:17.009552 1941066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:30:17.011581 1941066 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-1745120/kubeconfig
	I1002 00:30:17.013245 1941066 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-1745120/.minikube
	I1002 00:30:17.015210 1941066 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 00:30:17.017268 1941066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:30:17.019844 1941066 config.go:182] Loaded profile config "pause-489543": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1002 00:30:17.019963 1941066 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:30:17.054342 1941066 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1002 00:30:17.054467 1941066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 00:30:17.110957 1941066 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-02 00:30:17.100643126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1002 00:30:17.111113 1941066 docker.go:318] overlay module found
	I1002 00:30:17.113730 1941066 out.go:177] * Using the docker driver based on user configuration
	I1002 00:30:17.115598 1941066 start.go:297] selected driver: docker
	I1002 00:30:17.115618 1941066 start.go:901] validating driver "docker" against <nil>
	I1002 00:30:17.115633 1941066 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:30:17.118513 1941066 out.go:201] 
	W1002 00:30:17.120957 1941066 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1002 00:30:17.123119 1941066 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-811806 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-811806" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-811806" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:29:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-489543
contexts:
- context:
    cluster: pause-489543
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:29:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-489543
  name: pause-489543
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-489543
  user:
    client-certificate: /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/pause-489543/client.crt
    client-key: /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/pause-489543/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-811806

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811806"

                                                
                                                
----------------------- debugLogs end: false-811806 [took: 3.230428803s] --------------------------------
helpers_test.go:175: Cleaning up "false-811806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-811806
--- PASS: TestNetworkPlugins/group/false (3.64s)
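
The MK_USAGE exit in this group is the expected result: minikube refuses --cni=false whenever the container runtime is containerd, because containerd has no built-in pod networking and needs a CNI plugin. A hedged sketch of a start that would satisfy that validation, reusing the test's profile name purely for illustration and assuming the simple bridge CNI is acceptable (any supported --cni value, or a path to a custom CNI manifest, would also pass):

    # containerd needs a CNI, so pick one explicitly instead of --cni=false
    out/minikube-linux-arm64 start -p false-811806 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd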

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (8.83s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-489543 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-489543 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.806480892s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.83s)

                                                
                                    
x
+
TestPause/serial/Pause (0.99s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-489543 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.99s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-489543 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-489543 --output=json --layout=cluster: exit status 2 (354.908616ms)

                                                
                                                
-- stdout --
	{"Name":"pause-489543","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-489543","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
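
The cluster-layout status payload above is plain JSON, so it can be post-processed with standard tooling; for example, a quick way to pull out just the per-component states (jq is assumed to be available on the host):

    # extract the component status names from the paused cluster's JSON status
    out/minikube-linux-arm64 status -p pause-489543 --output=json --layout=cluster | jq '.Nodes[].Components | map_values(.StatusName)'

Note that the status command itself exits non-zero (status 2) while the cluster is paused, as shown above, so scripts should treat the exit code and the JSON payload separately.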

                                                
                                    
x
+
TestPause/serial/Unpause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-489543 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.03s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-489543 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-489543 --alsologtostderr -v=5: (1.031239238s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.86s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-489543 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-489543 --alsologtostderr -v=5: (2.860665754s)
--- PASS: TestPause/serial/DeletePaused (2.86s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-489543
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-489543: exit status 1 (19.821973ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-489543: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (140.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-920941 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1002 00:31:48.056409 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:32:02.577818 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-920941 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m20.536031312s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (140.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-920941 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eb5c6641-fffc-4476-a089-d210c5457072] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eb5c6641-fffc-4476-a089-d210c5457072] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00496436s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-920941 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)
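
The 8m0s pod wait the harness performs here can be approximated from the command line with kubectl wait; a rough equivalent, assuming the same context, namespace, and label selector:

    # block until the busybox pod reports Ready, or give up after 8 minutes
    kubectl --context old-k8s-version-920941 -n default wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m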

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-920941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-920941 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-920941 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-920941 --alsologtostderr -v=3: (12.037456875s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-920941 -n old-k8s-version-920941
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-920941 -n old-k8s-version-920941: exit status 7 (69.822673ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-920941 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (70.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-643266 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1002 00:35:05.645355 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-643266 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m10.074072022s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-643266 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [23408707-ad49-47b4-a57c-d805d09b0a97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [23408707-ad49-47b4-a57c-d805d09b0a97] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003522821s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-643266 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-643266 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-643266 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.036958259s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-643266 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-643266 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-643266 --alsologtostderr -v=3: (12.125850903s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-643266 -n no-preload-643266
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-643266 -n no-preload-643266: exit status 7 (82.877131ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-643266 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (301.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-643266 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1002 00:36:48.056399 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:37:02.577839 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-643266 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (5m1.477683396s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-643266 -n no-preload-643266
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (301.82s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g94cm" [bd4dc5f3-4296-4285-b523-c58ddbde05ca] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004468335s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g94cm" [bd4dc5f3-4296-4285-b523-c58ddbde05ca] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004695293s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-920941 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-920941 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/old-k8s-version/serial/Pause (3.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-920941 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-920941 --alsologtostderr -v=1: (1.129186902s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-920941 -n old-k8s-version-920941
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-920941 -n old-k8s-version-920941: exit status 2 (368.863194ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-920941 -n old-k8s-version-920941
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-920941 -n old-k8s-version-920941: exit status 2 (294.688205ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-920941 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-920941 -n old-k8s-version-920941
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-920941 -n old-k8s-version-920941
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.20s)

TestStartStop/group/embed-certs/serial/FirstStart (83.79s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-303193 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-303193 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m23.792051191s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.79s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-88gl8" [82d3e846-6a2d-4f5b-abac-f99cbdc25cfb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004312803s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-88gl8" [82d3e846-6a2d-4f5b-abac-f99cbdc25cfb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004731527s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-643266 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-643266 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (3.83s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-643266 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-643266 -n no-preload-643266
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-643266 -n no-preload-643266: exit status 2 (387.820562ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-643266 -n no-preload-643266
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-643266 -n no-preload-643266: exit status 2 (444.760712ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-643266 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-643266 -n no-preload-643266
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-643266 -n no-preload-643266
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.83s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-405840 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1002 00:41:48.056202 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:42:02.578121 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-405840 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m21.479740839s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.48s)

TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-303193 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a1d0f922-695a-47f9-826f-c8be5e8cf267] Pending
helpers_test.go:344: "busybox" [a1d0f922-695a-47f9-826f-c8be5e8cf267] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a1d0f922-695a-47f9-826f-c8be5e8cf267] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004483064s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-303193 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-303193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-303193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.083126062s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-303193 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (12.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-303193 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-303193 --alsologtostderr -v=3: (12.069645325s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.07s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-303193 -n embed-certs-303193
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-303193 -n embed-certs-303193: exit status 7 (62.704084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-303193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (268s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-303193 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-303193 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.652423797s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-303193 -n embed-certs-303193
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.00s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-405840 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c4dc4a8e-a64b-4d98-adb9-384570c66812] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c4dc4a8e-a64b-4d98-adb9-384570c66812] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003709793s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-405840 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.54s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-405840 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-405840 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.411110047s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-405840 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.58s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-405840 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-405840 --alsologtostderr -v=3: (13.139432851s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-405840 -n default-k8s-diff-port-405840
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-405840 -n default-k8s-diff-port-405840: exit status 7 (71.595607ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-405840 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-405840 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1002 00:44:05.598366 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:05.604805 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:05.616289 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:05.637691 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:05.679135 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:05.760561 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:05.921933 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:06.243665 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:06.885665 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:08.167440 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:10.728850 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:15.850244 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:26.091621 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:44:46.573418 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:27.535624 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:40.345919 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:40.352253 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:40.363637 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:40.385084 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:40.426413 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:40.507729 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:40.669424 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:40.991101 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:41.633186 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:42.915183 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:45.477108 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:45:50.598536 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:46:00.839880 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:46:21.321270 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:46:31.128422 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:46:48.056295 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:46:49.457024 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:47:02.283577 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:47:02.577450 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-405840 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.580730177s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-405840 -n default-k8s-diff-port-405840
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.03s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vvbjc" [c09d36ca-e472-4faa-a5f6-1f6da793a381] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004350955s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vvbjc" [c09d36ca-e472-4faa-a5f6-1f6da793a381] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003984912s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-303193 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-303193 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.36s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-303193 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-303193 --alsologtostderr -v=1: (1.109078693s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-303193 -n embed-certs-303193
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-303193 -n embed-certs-303193: exit status 2 (394.627798ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-303193 -n embed-certs-303193
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-303193 -n embed-certs-303193: exit status 2 (348.995815ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-303193 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-303193 -n embed-certs-303193
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-303193 -n embed-certs-303193
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.36s)

TestStartStop/group/newest-cni/serial/FirstStart (39.78s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-094344 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-094344 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (39.784599479s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.78s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6hv6h" [b9caa5eb-a68b-41bf-bf17-0cdb449cae52] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004318054s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6hv6h" [b9caa5eb-a68b-41bf-bf17-0cdb449cae52] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005210432s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-405840 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-405840 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-405840 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-405840 --alsologtostderr -v=1: (1.002471145s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-405840 -n default-k8s-diff-port-405840
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-405840 -n default-k8s-diff-port-405840: exit status 2 (367.890957ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-405840 -n default-k8s-diff-port-405840
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-405840 -n default-k8s-diff-port-405840: exit status 2 (329.000233ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-405840 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-405840 --alsologtostderr -v=1: (1.219793466s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-405840 -n default-k8s-diff-port-405840
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-405840 -n default-k8s-diff-port-405840
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.14s)

TestNetworkPlugins/group/auto/Start (57.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (57.790716252s)
--- PASS: TestNetworkPlugins/group/auto/Start (57.79s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-094344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-094344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.267122857s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/newest-cni/serial/Stop (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-094344 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-094344 --alsologtostderr -v=3: (1.315728881s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-094344 -n newest-cni-094344
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-094344 -n newest-cni-094344: exit status 7 (83.702959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-094344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (21.21s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-094344 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1002 00:48:24.205625 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-094344 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (20.783705139s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-094344 -n newest-cni-094344
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.21s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-094344 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (3.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-094344 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-094344 -n newest-cni-094344
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-094344 -n newest-cni-094344: exit status 2 (347.75809ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-094344 -n newest-cni-094344
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-094344 -n newest-cni-094344: exit status 2 (342.92339ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-094344 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-094344 -n newest-cni-094344
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-094344 -n newest-cni-094344
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.27s)
E1002 00:54:05.597965 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:54:06.755086 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/auto-811806/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:54:07.554667 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (91.41s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m31.410570002s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.41s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-811806 "pgrep -a kubelet"
I1002 00:48:56.136230 1750505 config.go:182] Loaded profile config "auto-811806": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (9.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-811806 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7rt2q" [d8221470-0486-416a-acfc-d5b5d22d303d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7rt2q" [d8221470-0486-416a-acfc-d5b5d22d303d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005429778s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.39s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-811806 exec deployment/netcat -- nslookup kubernetes.default
E1002 00:49:05.598216 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestNetworkPlugins/group/calico/Start (58.17s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1002 00:49:33.298751 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/old-k8s-version-920941/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (58.169065091s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.17s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6hxgx" [86b42c39-ef64-4138-bf6f-1b64e7dfc9a1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004037815s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-811806 "pgrep -a kubelet"
I1002 00:50:18.194009 1750505 config.go:182] Loaded profile config "kindnet-811806": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)
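
The KubeletFlags check simply lists the running kubelet command line inside the node container so the test can assert on its flags. The same inspection can be done by hand while the profile is running (sketch, assuming the kindnet-811806 cluster is still up):

# Print the kubelet process and its full argument list inside the minikube node.
out/minikube-linux-arm64 ssh -p kindnet-811806 "pgrep -a kubelet"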

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-811806 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ftv99" [ee66e579-3a0a-499b-a4fc-bd3199991e1e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ftv99" [ee66e579-3a0a-499b-a4fc-bd3199991e1e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003703604s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-cccvr" [4ef2cea0-71d3-42ad-8625-62f4ca4fcd15] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004254236s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-811806 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-811806 "pgrep -a kubelet"
I1002 00:50:31.964909 1750505 config.go:182] Loaded profile config "calico-811806": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-811806 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-klwh6" [31ec908e-4f58-4114-94b1-c37cb4ac3705] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-klwh6" [31ec908e-4f58-4114-94b1-c37cb4ac3705] Running
E1002 00:50:40.346177 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005126479s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-811806 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (59.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.974168736s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.97s)
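
Unlike the built-in CNI names (calico, kindnet, flannel, bridge), the custom-flannel group passes a manifest path to --cni, so minikube applies that file instead of a bundled CNI. Sketch reusing the flags from the run above:

# --cni accepts either a built-in CNI name or a path to a CNI manifest;
# here the repository's testdata/kube-flannel.yaml is applied as-is.
out/minikube-linux-arm64 start -p custom-flannel-811806 \
  --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
  --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd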

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (49.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1002 00:51:08.047082 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/no-preload-643266/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:51:45.646803 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:51:48.056759 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/addons-515343/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (49.02604135s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (49.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-811806 "pgrep -a kubelet"
I1002 00:51:49.872317 1750505 config.go:182] Loaded profile config "custom-flannel-811806": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-811806 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mnkqq" [d9a92db7-96c9-4721-827c-af1ac755a7f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mnkqq" [d9a92db7-96c9-4721-827c-af1ac755a7f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004873079s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-811806 "pgrep -a kubelet"
I1002 00:51:57.332046 1750505 config.go:182] Loaded profile config "enable-default-cni-811806": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-811806 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kd728" [5e86a60f-2dd9-4502-8830-7450e5546738] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kd728" [5e86a60f-2dd9-4502-8830-7450e5546738] Running
E1002 00:52:02.578159 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/functional-126185/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003663382s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-811806 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-811806 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (65.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m5.402520502s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (75.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1002 00:52:45.616638 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:52:45.622916 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:52:45.634221 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:52:45.655740 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:52:45.697029 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:52:45.778446 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:52:45.939869 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:52:46.261412 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:52:46.903209 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:52:48.184800 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:52:50.746527 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:52:55.868446 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:53:06.110515 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:53:26.592357 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/default-k8s-diff-port-405840/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-811806 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m15.99293341s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5lxkz" [7c9b80b0-f0fd-4782-be36-49bfde1f38f8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003920782s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-811806 "pgrep -a kubelet"
I1002 00:53:33.155651 1750505 config.go:182] Loaded profile config "flannel-811806": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-811806 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nzt26" [864c3cfb-7ecf-4c69-8703-08483efea872] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nzt26" [864c3cfb-7ecf-4c69-8703-08483efea872] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004409435s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-811806 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-811806 "pgrep -a kubelet"
I1002 00:53:47.139019 1750505 config.go:182] Loaded profile config "bridge-811806": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-811806 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-26c8d" [4c51030b-ada7-4e3e-98a1-f1d3b28898e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-26c8d" [4c51030b-ada7-4e3e-98a1-f1d3b28898e4] Running
E1002 00:53:56.501163 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/auto-811806/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:53:56.507500 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/auto-811806/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:53:56.518845 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/auto-811806/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:53:56.540205 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/auto-811806/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:53:56.581621 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/auto-811806/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:53:56.662978 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/auto-811806/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:53:56.825031 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/auto-811806/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:53:57.146719 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/auto-811806/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:53:57.788666 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/auto-811806/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:53:59.070948 1750505 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/auto-811806/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004380499s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-811806 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-811806 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                    

Test skip (27/327)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.52s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-721980 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-721980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-721980
--- SKIP: TestDownloadOnlyKic (0.52s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-422272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-422272
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-811806 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-811806" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-811806" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:29:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-489543
contexts:
- context:
    cluster: pause-489543
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:29:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-489543
  name: pause-489543
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-489543
  user:
    client-certificate: /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/pause-489543/client.crt
    client-key: /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/pause-489543/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-811806

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811806"

                                                
                                                
----------------------- debugLogs end: kubenet-811806 [took: 3.386159555s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-811806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-811806
--- SKIP: TestNetworkPlugins/group/kubenet (3.54s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-811806 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-811806" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19740-1745120/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:29:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-489543
contexts:
- context:
    cluster: pause-489543
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:29:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-489543
  name: pause-489543
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-489543
  user:
    client-certificate: /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/pause-489543/client.crt
    client-key: /home/jenkins/minikube-integration/19740-1745120/.minikube/profiles/pause-489543/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-811806

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-811806" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811806"

                                                
                                                
----------------------- debugLogs end: cilium-811806 [took: 4.189847547s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-811806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-811806
--- SKIP: TestNetworkPlugins/group/cilium (4.42s)

                                                
                                    