Test Report: Docker_Linux_containerd_arm64 19734

795b96072c2ea51545c2bdfc984dcdf8fe273799:2024-09-30:36435

Test fail (2/327)

Order | Failed test                                             | Duration (s)
   29 | TestAddons/serial/Volcano                                | 199.81
  301 | TestStartStop/group/old-k8s-version/serial/SecondStart   | 376.06
TestAddons/serial/Volcano (199.81s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 56.233134ms
addons_test.go:843: volcano-admission stabilized in 56.370134ms
addons_test.go:835: volcano-scheduler stabilized in 56.413391ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-lqdvw" [9d94e34b-f457-4f5f-8164-949d6ee753d8] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003719536s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-zml2h" [b6546196-6794-44e4-af39-a44ef5a91568] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003956612s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-kmkvq" [f42bff10-9412-4b5b-9ced-860bfc776879] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003392773s
addons_test.go:870: (dbg) Run:  kubectl --context addons-472765 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-472765 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-472765 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [9759cea0-609d-42be-b718-a98a6b71dcc3] Pending
helpers_test.go:344: "test-job-nginx-0" [9759cea0-609d-42be-b718-a98a6b71dcc3] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:902: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:902: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-472765 -n addons-472765
addons_test.go:902: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-30 10:31:16.456797044 +0000 UTC m=+427.355762158
addons_test.go:902: (dbg) Run:  kubectl --context addons-472765 describe po test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-472765 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-6c29c150-f4b8-4686-aedd-55ba14795be7
                  volcano.sh/job-name: test-job
                  volcano.sh/job-retry-count: 0
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ndhwh (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-ndhwh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:902: (dbg) Run:  kubectl --context addons-472765 logs test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-472765 logs test-job-nginx-0 -n my-volcano:
addons_test.go:903: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-472765
helpers_test.go:235: (dbg) docker inspect addons-472765:

-- stdout --
	[
	    {
	        "Id": "d67f5a96ab25388f63f73a9350c102907985efddf8dc33bcfd74c5ad95f157be",
	        "Created": "2024-09-30T10:24:46.734266074Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2545407,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-30T10:24:46.868493728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/d67f5a96ab25388f63f73a9350c102907985efddf8dc33bcfd74c5ad95f157be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d67f5a96ab25388f63f73a9350c102907985efddf8dc33bcfd74c5ad95f157be/hostname",
	        "HostsPath": "/var/lib/docker/containers/d67f5a96ab25388f63f73a9350c102907985efddf8dc33bcfd74c5ad95f157be/hosts",
	        "LogPath": "/var/lib/docker/containers/d67f5a96ab25388f63f73a9350c102907985efddf8dc33bcfd74c5ad95f157be/d67f5a96ab25388f63f73a9350c102907985efddf8dc33bcfd74c5ad95f157be-json.log",
	        "Name": "/addons-472765",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-472765:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-472765",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/684001100e946d98236433d1597d1b06b42baa4acd259ed12c86be27bb0d9280-init/diff:/var/lib/docker/overlay2/cfa9a1331be3f2237f098c9bbe24267823c6ebd2f4d869da3f0aaddb0fb064b7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/684001100e946d98236433d1597d1b06b42baa4acd259ed12c86be27bb0d9280/merged",
	                "UpperDir": "/var/lib/docker/overlay2/684001100e946d98236433d1597d1b06b42baa4acd259ed12c86be27bb0d9280/diff",
	                "WorkDir": "/var/lib/docker/overlay2/684001100e946d98236433d1597d1b06b42baa4acd259ed12c86be27bb0d9280/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-472765",
	                "Source": "/var/lib/docker/volumes/addons-472765/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-472765",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-472765",
	                "name.minikube.sigs.k8s.io": "addons-472765",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "222ca727c98a21332782cceb950ff0b9d2c3329de55da156ef00f410c65e2244",
	            "SandboxKey": "/var/run/docker/netns/222ca727c98a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41303"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41304"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41307"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41305"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41306"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-472765": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b1176cc45ad97ce07a7f33b867ec19cf01fefe7bd3684a62d9c146be6eba5802",
	                    "EndpointID": "4f134dbbcb52ba9752b2738231a5c3e383b391a4bcce7697973f680cef81efc8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-472765",
	                        "d67f5a96ab25"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-472765 -n addons-472765
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-472765 logs -n 25: (1.578120122s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-862665   | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC |                     |
	|         | -p download-only-862665              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	| delete  | -p download-only-862665              | download-only-862665   | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	| start   | -o=json --download-only              | download-only-833953   | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC |                     |
	|         | -p download-only-833953              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	| delete  | -p download-only-833953              | download-only-833953   | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	| delete  | -p download-only-862665              | download-only-862665   | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	| delete  | -p download-only-833953              | download-only-833953   | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	| start   | --download-only -p                   | download-docker-902924 | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC |                     |
	|         | download-docker-902924               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-902924            | download-docker-902924 | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	| start   | --download-only -p                   | binary-mirror-263235   | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC |                     |
	|         | binary-mirror-263235                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41507               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-263235              | binary-mirror-263235   | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	| addons  | enable dashboard -p                  | addons-472765          | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC |                     |
	|         | addons-472765                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-472765          | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC |                     |
	|         | addons-472765                        |                        |         |         |                     |                     |
	| start   | -p addons-472765 --wait=true         | addons-472765          | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:27 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:24:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:24:22.066251 2544921 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:24:22.066372 2544921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:24:22.066382 2544921 out.go:358] Setting ErrFile to fd 2...
	I0930 10:24:22.066387 2544921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:24:22.066612 2544921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	I0930 10:24:22.067091 2544921 out.go:352] Setting JSON to false
	I0930 10:24:22.067956 2544921 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":151610,"bootTime":1727540252,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0930 10:24:22.068030 2544921 start.go:139] virtualization:  
	I0930 10:24:22.071319 2544921 out.go:177] * [addons-472765] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 10:24:22.074877 2544921 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:24:22.074968 2544921 notify.go:220] Checking for updates...
	I0930 10:24:22.080139 2544921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:24:22.082641 2544921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 10:24:22.085528 2544921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	I0930 10:24:22.088193 2544921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 10:24:22.090853 2544921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:24:22.093883 2544921 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:24:22.116483 2544921 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:24:22.116614 2544921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:24:22.167748 2544921 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:24:22.157194979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:24:22.167907 2544921 docker.go:318] overlay module found
	I0930 10:24:22.172586 2544921 out.go:177] * Using the docker driver based on user configuration
	I0930 10:24:22.175119 2544921 start.go:297] selected driver: docker
	I0930 10:24:22.175138 2544921 start.go:901] validating driver "docker" against <nil>
	I0930 10:24:22.175153 2544921 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:24:22.175886 2544921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:24:22.233539 2544921 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:24:22.223979781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:24:22.233802 2544921 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:24:22.234128 2544921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:24:22.236865 2544921 out.go:177] * Using Docker driver with root privileges
	I0930 10:24:22.242917 2544921 cni.go:84] Creating CNI manager for ""
	I0930 10:24:22.242988 2544921 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0930 10:24:22.242999 2544921 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 10:24:22.243077 2544921 start.go:340] cluster config:
	{Name:addons-472765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-472765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:24:22.251543 2544921 out.go:177] * Starting "addons-472765" primary control-plane node in "addons-472765" cluster
	I0930 10:24:22.253269 2544921 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0930 10:24:22.255317 2544921 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0930 10:24:22.257073 2544921 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0930 10:24:22.257142 2544921 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0930 10:24:22.257160 2544921 cache.go:56] Caching tarball of preloaded images
	I0930 10:24:22.257176 2544921 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0930 10:24:22.257245 2544921 preload.go:172] Found /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 10:24:22.257255 2544921 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0930 10:24:22.257595 2544921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/config.json ...
	I0930 10:24:22.257654 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/config.json: {Name:mkd8eb68a2b47b69aa28df3fbffb37e07951a62b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:22.272280 2544921 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:24:22.272391 2544921 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0930 10:24:22.272414 2544921 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0930 10:24:22.272420 2544921 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0930 10:24:22.272428 2544921 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0930 10:24:22.272437 2544921 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0930 10:24:39.281435 2544921 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0930 10:24:39.281474 2544921 cache.go:194] Successfully downloaded all kic artifacts
	I0930 10:24:39.281515 2544921 start.go:360] acquireMachinesLock for addons-472765: {Name:mk06304ca928d3d6def667e4fd980b73911c93eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 10:24:39.281633 2544921 start.go:364] duration metric: took 95.499µs to acquireMachinesLock for "addons-472765"
	I0930 10:24:39.281664 2544921 start.go:93] Provisioning new machine with config: &{Name:addons-472765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-472765 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0930 10:24:39.281749 2544921 start.go:125] createHost starting for "" (driver="docker")
	I0930 10:24:39.285409 2544921 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0930 10:24:39.285666 2544921 start.go:159] libmachine.API.Create for "addons-472765" (driver="docker")
	I0930 10:24:39.285703 2544921 client.go:168] LocalClient.Create starting
	I0930 10:24:39.285818 2544921 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem
	I0930 10:24:40.201495 2544921 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/cert.pem
	I0930 10:24:40.560490 2544921 cli_runner.go:164] Run: docker network inspect addons-472765 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0930 10:24:40.575490 2544921 cli_runner.go:211] docker network inspect addons-472765 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0930 10:24:40.575586 2544921 network_create.go:284] running [docker network inspect addons-472765] to gather additional debugging logs...
	I0930 10:24:40.575633 2544921 cli_runner.go:164] Run: docker network inspect addons-472765
	W0930 10:24:40.591233 2544921 cli_runner.go:211] docker network inspect addons-472765 returned with exit code 1
	I0930 10:24:40.591265 2544921 network_create.go:287] error running [docker network inspect addons-472765]: docker network inspect addons-472765: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-472765 not found
	I0930 10:24:40.591278 2544921 network_create.go:289] output of [docker network inspect addons-472765]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-472765 not found
	
	** /stderr **
	I0930 10:24:40.591389 2544921 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:24:40.610187 2544921 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017c8ca0}
	I0930 10:24:40.610230 2544921 network_create.go:124] attempt to create docker network addons-472765 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0930 10:24:40.610289 2544921 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-472765 addons-472765
	I0930 10:24:40.683146 2544921 network_create.go:108] docker network addons-472765 192.168.49.0/24 created
	I0930 10:24:40.683176 2544921 kic.go:121] calculated static IP "192.168.49.2" for the "addons-472765" container
	I0930 10:24:40.683267 2544921 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0930 10:24:40.698452 2544921 cli_runner.go:164] Run: docker volume create addons-472765 --label name.minikube.sigs.k8s.io=addons-472765 --label created_by.minikube.sigs.k8s.io=true
	I0930 10:24:40.716309 2544921 oci.go:103] Successfully created a docker volume addons-472765
	I0930 10:24:40.716407 2544921 cli_runner.go:164] Run: docker run --rm --name addons-472765-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-472765 --entrypoint /usr/bin/test -v addons-472765:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0930 10:24:42.737157 2544921 cli_runner.go:217] Completed: docker run --rm --name addons-472765-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-472765 --entrypoint /usr/bin/test -v addons-472765:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.020697153s)
	I0930 10:24:42.737191 2544921 oci.go:107] Successfully prepared a docker volume addons-472765
	I0930 10:24:42.737216 2544921 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0930 10:24:42.737236 2544921 kic.go:194] Starting extracting preloaded images to volume ...
	I0930 10:24:42.737308 2544921 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-472765:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0930 10:24:46.676196 2544921 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-472765:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.938840279s)
	I0930 10:24:46.676230 2544921 kic.go:203] duration metric: took 3.938990029s to extract preloaded images to volume ...
	W0930 10:24:46.676405 2544921 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0930 10:24:46.676525 2544921 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0930 10:24:46.720048 2544921 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-472765 --name addons-472765 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-472765 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-472765 --network addons-472765 --ip 192.168.49.2 --volume addons-472765:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0930 10:24:47.015237 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Running}}
	I0930 10:24:47.039305 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:24:47.068752 2544921 cli_runner.go:164] Run: docker exec addons-472765 stat /var/lib/dpkg/alternatives/iptables
	I0930 10:24:47.119806 2544921 oci.go:144] the created container "addons-472765" has a running status.
	I0930 10:24:47.119839 2544921 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa...
	I0930 10:24:47.265119 2544921 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0930 10:24:47.296831 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:24:47.319867 2544921 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0930 10:24:47.319891 2544921 kic_runner.go:114] Args: [docker exec --privileged addons-472765 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0930 10:24:47.396755 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:24:47.419803 2544921 machine.go:93] provisionDockerMachine start ...
	I0930 10:24:47.419899 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:24:47.448339 2544921 main.go:141] libmachine: Using SSH client type: native
	I0930 10:24:47.448606 2544921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41303 <nil> <nil>}
	I0930 10:24:47.448623 2544921 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 10:24:47.449410 2544921 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52314->127.0.0.1:41303: read: connection reset by peer
	I0930 10:24:50.583114 2544921 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-472765
	
	I0930 10:24:50.583181 2544921 ubuntu.go:169] provisioning hostname "addons-472765"
	I0930 10:24:50.583262 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:24:50.600216 2544921 main.go:141] libmachine: Using SSH client type: native
	I0930 10:24:50.600491 2544921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41303 <nil> <nil>}
	I0930 10:24:50.600509 2544921 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-472765 && echo "addons-472765" | sudo tee /etc/hostname
	I0930 10:24:50.739003 2544921 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-472765
	
	I0930 10:24:50.739087 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:24:50.756601 2544921 main.go:141] libmachine: Using SSH client type: native
	I0930 10:24:50.756842 2544921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41303 <nil> <nil>}
	I0930 10:24:50.756865 2544921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-472765' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-472765/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-472765' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 10:24:50.887508 2544921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 10:24:50.887536 2544921 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19734-2538756/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-2538756/.minikube}
	I0930 10:24:50.887559 2544921 ubuntu.go:177] setting up certificates
	I0930 10:24:50.887569 2544921 provision.go:84] configureAuth start
	I0930 10:24:50.887650 2544921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-472765
	I0930 10:24:50.903835 2544921 provision.go:143] copyHostCerts
	I0930 10:24:50.903913 2544921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.pem (1078 bytes)
	I0930 10:24:50.904078 2544921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-2538756/.minikube/cert.pem (1123 bytes)
	I0930 10:24:50.904147 2544921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-2538756/.minikube/key.pem (1679 bytes)
	I0930 10:24:50.904198 2544921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca-key.pem org=jenkins.addons-472765 san=[127.0.0.1 192.168.49.2 addons-472765 localhost minikube]
	I0930 10:24:51.115548 2544921 provision.go:177] copyRemoteCerts
	I0930 10:24:51.115641 2544921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 10:24:51.115686 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:24:51.132540 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:24:51.224343 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 10:24:51.247596 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0930 10:24:51.270837 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 10:24:51.294152 2544921 provision.go:87] duration metric: took 406.560035ms to configureAuth
	I0930 10:24:51.294180 2544921 ubuntu.go:193] setting minikube options for container-runtime
	I0930 10:24:51.294368 2544921 config.go:182] Loaded profile config "addons-472765": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 10:24:51.294390 2544921 machine.go:96] duration metric: took 3.874554649s to provisionDockerMachine
	I0930 10:24:51.294398 2544921 client.go:171] duration metric: took 12.008685025s to LocalClient.Create
	I0930 10:24:51.294417 2544921 start.go:167] duration metric: took 12.008751363s to libmachine.API.Create "addons-472765"
	I0930 10:24:51.294428 2544921 start.go:293] postStartSetup for "addons-472765" (driver="docker")
	I0930 10:24:51.294438 2544921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 10:24:51.294488 2544921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 10:24:51.294531 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:24:51.310495 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:24:51.404484 2544921 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 10:24:51.407563 2544921 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0930 10:24:51.407633 2544921 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0930 10:24:51.407648 2544921 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0930 10:24:51.407665 2544921 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0930 10:24:51.407675 2544921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-2538756/.minikube/addons for local assets ...
	I0930 10:24:51.407742 2544921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-2538756/.minikube/files for local assets ...
	I0930 10:24:51.407769 2544921 start.go:296] duration metric: took 113.335182ms for postStartSetup
	I0930 10:24:51.408099 2544921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-472765
	I0930 10:24:51.424554 2544921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/config.json ...
	I0930 10:24:51.424840 2544921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:24:51.424888 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:24:51.441904 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:24:51.532451 2544921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0930 10:24:51.536920 2544921 start.go:128] duration metric: took 12.255155936s to createHost
	I0930 10:24:51.536947 2544921 start.go:83] releasing machines lock for "addons-472765", held for 12.255299081s
	I0930 10:24:51.537018 2544921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-472765
	I0930 10:24:51.554448 2544921 ssh_runner.go:195] Run: cat /version.json
	I0930 10:24:51.554506 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:24:51.554776 2544921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 10:24:51.554845 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:24:51.577242 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:24:51.592358 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:24:51.671191 2544921 ssh_runner.go:195] Run: systemctl --version
	I0930 10:24:51.797429 2544921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 10:24:51.802032 2544921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0930 10:24:51.826585 2544921 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0930 10:24:51.826679 2544921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 10:24:51.856367 2544921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0930 10:24:51.856399 2544921 start.go:495] detecting cgroup driver to use...
	I0930 10:24:51.856448 2544921 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 10:24:51.856502 2544921 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0930 10:24:51.869003 2544921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 10:24:51.880992 2544921 docker.go:217] disabling cri-docker service (if available) ...
	I0930 10:24:51.881088 2544921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 10:24:51.895207 2544921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 10:24:51.910068 2544921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 10:24:52.001368 2544921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 10:24:52.095102 2544921 docker.go:233] disabling docker service ...
	I0930 10:24:52.095191 2544921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 10:24:52.115247 2544921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 10:24:52.128747 2544921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 10:24:52.228572 2544921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 10:24:52.317381 2544921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 10:24:52.328868 2544921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 10:24:52.345168 2544921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0930 10:24:52.355333 2544921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0930 10:24:52.365002 2544921 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0930 10:24:52.365067 2544921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0930 10:24:52.375167 2544921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 10:24:52.384706 2544921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0930 10:24:52.393965 2544921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 10:24:52.403706 2544921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 10:24:52.412828 2544921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0930 10:24:52.422653 2544921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0930 10:24:52.432507 2544921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0930 10:24:52.442315 2544921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 10:24:52.451121 2544921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 10:24:52.459536 2544921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:24:52.548639 2544921 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0930 10:24:52.672161 2544921 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0930 10:24:52.672299 2544921 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0930 10:24:52.676160 2544921 start.go:563] Will wait 60s for crictl version
	I0930 10:24:52.676269 2544921 ssh_runner.go:195] Run: which crictl
	I0930 10:24:52.679562 2544921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 10:24:52.713314 2544921 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0930 10:24:52.713432 2544921 ssh_runner.go:195] Run: containerd --version
	I0930 10:24:52.743971 2544921 ssh_runner.go:195] Run: containerd --version
	I0930 10:24:52.770342 2544921 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0930 10:24:52.772299 2544921 cli_runner.go:164] Run: docker network inspect addons-472765 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:24:52.787200 2544921 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0930 10:24:52.790802 2544921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:24:52.801633 2544921 kubeadm.go:883] updating cluster {Name:addons-472765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-472765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 10:24:52.801760 2544921 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0930 10:24:52.801825 2544921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 10:24:52.836584 2544921 containerd.go:627] all images are preloaded for containerd runtime.
	I0930 10:24:52.836608 2544921 containerd.go:534] Images already preloaded, skipping extraction
	I0930 10:24:52.836668 2544921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 10:24:52.872444 2544921 containerd.go:627] all images are preloaded for containerd runtime.
	I0930 10:24:52.872469 2544921 cache_images.go:84] Images are preloaded, skipping loading
	I0930 10:24:52.872477 2544921 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0930 10:24:52.872631 2544921 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-472765 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-472765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 10:24:52.872704 2544921 ssh_runner.go:195] Run: sudo crictl info
	I0930 10:24:52.911080 2544921 cni.go:84] Creating CNI manager for ""
	I0930 10:24:52.911106 2544921 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0930 10:24:52.911121 2544921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 10:24:52.911144 2544921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-472765 NodeName:addons-472765 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 10:24:52.911291 2544921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-472765"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 10:24:52.911367 2544921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 10:24:52.920613 2544921 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 10:24:52.920683 2544921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 10:24:52.929914 2544921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0930 10:24:52.948215 2544921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 10:24:52.966440 2544921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0930 10:24:52.984691 2544921 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0930 10:24:52.988249 2544921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:24:52.998618 2544921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:24:53.090503 2544921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:24:53.104626 2544921 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765 for IP: 192.168.49.2
	I0930 10:24:53.104649 2544921 certs.go:194] generating shared ca certs ...
	I0930 10:24:53.104673 2544921 certs.go:226] acquiring lock for ca certs: {Name:mkff6faeb681279e5ac456a1e9fb9c9dcac2d430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:53.105456 2544921 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.key
	I0930 10:24:53.641411 2544921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.crt ...
	I0930 10:24:53.641448 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.crt: {Name:mk8f57e961c3a212304747d0ba8d64179ce12fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:53.641652 2544921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.key ...
	I0930 10:24:53.641665 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.key: {Name:mk2d8afa103bef397d1854ea58482fae45dfc778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:53.642473 2544921 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.key
	I0930 10:24:53.945885 2544921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.crt ...
	I0930 10:24:53.945916 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.crt: {Name:mk9c74bae5ed24a444237e0b4db26a19a7e010d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:53.946112 2544921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.key ...
	I0930 10:24:53.946124 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.key: {Name:mk42bb6b74e399419c02410ff2bf9757dcb5bd13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:53.946930 2544921 certs.go:256] generating profile certs ...
	I0930 10:24:53.947000 2544921 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.key
	I0930 10:24:53.947019 2544921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt with IP's: []
	I0930 10:24:54.472628 2544921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt ...
	I0930 10:24:54.472662 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: {Name:mkab64cb2e90e8bea14442fe03c89ebe02eb5ce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:54.473470 2544921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.key ...
	I0930 10:24:54.473489 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.key: {Name:mkdb8c967126e0e1ffe87e2924c20ce7a8d904a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:54.473596 2544921 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.key.68cfd8f6
	I0930 10:24:54.473618 2544921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.crt.68cfd8f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0930 10:24:55.187503 2544921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.crt.68cfd8f6 ...
	I0930 10:24:55.187534 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.crt.68cfd8f6: {Name:mk232301f2d9595266b876356530f1c7d63392a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:55.188280 2544921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.key.68cfd8f6 ...
	I0930 10:24:55.188298 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.key.68cfd8f6: {Name:mk8ce6de07b36668d28fab11638dfbaf515bcdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:55.188923 2544921 certs.go:381] copying /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.crt.68cfd8f6 -> /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.crt
	I0930 10:24:55.189011 2544921 certs.go:385] copying /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.key.68cfd8f6 -> /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.key
	I0930 10:24:55.189067 2544921 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/proxy-client.key
	I0930 10:24:55.189088 2544921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/proxy-client.crt with IP's: []
	I0930 10:24:55.729778 2544921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/proxy-client.crt ...
	I0930 10:24:55.729814 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/proxy-client.crt: {Name:mk63ea775fee3dde510463e29504fe112f109a8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:55.730015 2544921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/proxy-client.key ...
	I0930 10:24:55.730030 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/proxy-client.key: {Name:mke04a57e9d4ed97e9c663ef54dde3529dbdcdf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:24:55.730833 2544921 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 10:24:55.730880 2544921 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem (1078 bytes)
	I0930 10:24:55.730910 2544921 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/cert.pem (1123 bytes)
	I0930 10:24:55.730946 2544921 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/key.pem (1679 bytes)
	I0930 10:24:55.731530 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 10:24:55.756514 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 10:24:55.781858 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 10:24:55.805973 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 10:24:55.829563 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 10:24:55.853615 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 10:24:55.877925 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 10:24:55.901575 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 10:24:55.925600 2544921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 10:24:55.949394 2544921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 10:24:55.968105 2544921 ssh_runner.go:195] Run: openssl version
	I0930 10:24:55.973585 2544921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 10:24:55.983411 2544921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:24:55.986920 2544921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:24:55.986985 2544921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:24:55.993925 2544921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 10:24:56.013528 2544921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 10:24:56.017641 2544921 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 10:24:56.017697 2544921 kubeadm.go:392] StartCluster: {Name:addons-472765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-472765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:24:56.017797 2544921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0930 10:24:56.017857 2544921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 10:24:56.057384 2544921 cri.go:89] found id: ""
	I0930 10:24:56.057463 2544921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 10:24:56.067021 2544921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 10:24:56.076494 2544921 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0930 10:24:56.076588 2544921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 10:24:56.085805 2544921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 10:24:56.085832 2544921 kubeadm.go:157] found existing configuration files:
	
	I0930 10:24:56.085887 2544921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 10:24:56.095006 2544921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 10:24:56.095074 2544921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 10:24:56.103532 2544921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 10:24:56.112339 2544921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 10:24:56.112433 2544921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 10:24:56.121193 2544921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 10:24:56.130042 2544921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 10:24:56.130157 2544921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 10:24:56.139008 2544921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 10:24:56.148358 2544921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 10:24:56.148437 2544921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 10:24:56.156970 2544921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0930 10:24:56.199166 2544921 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 10:24:56.199516 2544921 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 10:24:56.233396 2544921 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0930 10:24:56.233473 2544921 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0930 10:24:56.233517 2544921 kubeadm.go:310] OS: Linux
	I0930 10:24:56.233568 2544921 kubeadm.go:310] CGROUPS_CPU: enabled
	I0930 10:24:56.233620 2544921 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0930 10:24:56.233670 2544921 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0930 10:24:56.233721 2544921 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0930 10:24:56.233772 2544921 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0930 10:24:56.233826 2544921 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0930 10:24:56.233874 2544921 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0930 10:24:56.233928 2544921 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0930 10:24:56.233978 2544921 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0930 10:24:56.298945 2544921 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 10:24:56.299058 2544921 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 10:24:56.299157 2544921 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 10:24:56.304055 2544921 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 10:24:56.307288 2544921 out.go:235]   - Generating certificates and keys ...
	I0930 10:24:56.307431 2544921 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 10:24:56.307512 2544921 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 10:24:56.545676 2544921 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 10:24:57.057542 2544921 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 10:24:57.298045 2544921 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 10:24:57.702869 2544921 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 10:24:57.955945 2544921 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 10:24:57.956279 2544921 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-472765 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:24:58.851076 2544921 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 10:24:58.851441 2544921 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-472765 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:24:59.176754 2544921 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 10:24:59.621029 2544921 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 10:25:00.673987 2544921 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 10:25:00.674062 2544921 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 10:25:00.868888 2544921 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 10:25:01.183583 2544921 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 10:25:01.435006 2544921 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 10:25:01.612802 2544921 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 10:25:02.155938 2544921 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 10:25:02.156740 2544921 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 10:25:02.160149 2544921 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 10:25:02.162403 2544921 out.go:235]   - Booting up control plane ...
	I0930 10:25:02.162524 2544921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 10:25:02.163255 2544921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 10:25:02.164762 2544921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 10:25:02.178770 2544921 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 10:25:02.185374 2544921 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 10:25:02.185435 2544921 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 10:25:02.288112 2544921 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 10:25:02.288232 2544921 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 10:25:04.286638 2544921 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001399547s
	I0930 10:25:04.286733 2544921 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 10:25:10.288357 2544921 kubeadm.go:310] [api-check] The API server is healthy after 6.001637804s
	I0930 10:25:10.310647 2544921 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 10:25:10.323210 2544921 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 10:25:10.347999 2544921 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 10:25:10.348211 2544921 kubeadm.go:310] [mark-control-plane] Marking the node addons-472765 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 10:25:10.359481 2544921 kubeadm.go:310] [bootstrap-token] Using token: 3nawq9.30lf3k6l1ttamj8y
	I0930 10:25:10.362535 2544921 out.go:235]   - Configuring RBAC rules ...
	I0930 10:25:10.362677 2544921 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 10:25:10.369761 2544921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 10:25:10.377736 2544921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 10:25:10.381560 2544921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 10:25:10.385159 2544921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 10:25:10.388794 2544921 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 10:25:10.695846 2544921 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 10:25:11.132169 2544921 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 10:25:11.695193 2544921 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 10:25:11.696493 2544921 kubeadm.go:310] 
	I0930 10:25:11.696571 2544921 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 10:25:11.696578 2544921 kubeadm.go:310] 
	I0930 10:25:11.696655 2544921 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 10:25:11.696661 2544921 kubeadm.go:310] 
	I0930 10:25:11.696686 2544921 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 10:25:11.696802 2544921 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 10:25:11.696860 2544921 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 10:25:11.696872 2544921 kubeadm.go:310] 
	I0930 10:25:11.696928 2544921 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 10:25:11.696936 2544921 kubeadm.go:310] 
	I0930 10:25:11.696988 2544921 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 10:25:11.696997 2544921 kubeadm.go:310] 
	I0930 10:25:11.697049 2544921 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 10:25:11.697153 2544921 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 10:25:11.697228 2544921 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 10:25:11.697233 2544921 kubeadm.go:310] 
	I0930 10:25:11.697317 2544921 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 10:25:11.697392 2544921 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 10:25:11.697397 2544921 kubeadm.go:310] 
	I0930 10:25:11.697480 2544921 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3nawq9.30lf3k6l1ttamj8y \
	I0930 10:25:11.697581 2544921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5aa63cba6be4f1230c7d64107d314f9c209c586b197f250cf9c04171ce523f01 \
	I0930 10:25:11.697603 2544921 kubeadm.go:310] 	--control-plane 
	I0930 10:25:11.697607 2544921 kubeadm.go:310] 
	I0930 10:25:11.697690 2544921 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 10:25:11.697695 2544921 kubeadm.go:310] 
	I0930 10:25:11.697775 2544921 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3nawq9.30lf3k6l1ttamj8y \
	I0930 10:25:11.697880 2544921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5aa63cba6be4f1230c7d64107d314f9c209c586b197f250cf9c04171ce523f01 
	I0930 10:25:11.701006 2544921 kubeadm.go:310] W0930 10:24:56.195697    1027 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:25:11.701304 2544921 kubeadm.go:310] W0930 10:24:56.196725    1027 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:25:11.701518 2544921 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0930 10:25:11.701626 2544921 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 10:25:11.701645 2544921 cni.go:84] Creating CNI manager for ""
	I0930 10:25:11.701656 2544921 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0930 10:25:11.703662 2544921 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0930 10:25:11.705438 2544921 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0930 10:25:11.710213 2544921 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0930 10:25:11.710236 2544921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0930 10:25:11.731810 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0930 10:25:11.999215 2544921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 10:25:11.999315 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:25:11.999355 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-472765 minikube.k8s.io/updated_at=2024_09_30T10_25_11_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=addons-472765 minikube.k8s.io/primary=true
	I0930 10:25:12.161742 2544921 ops.go:34] apiserver oom_adj: -16
	I0930 10:25:12.161850 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:25:12.662067 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:25:13.161966 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:25:13.662137 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:25:14.162620 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:25:14.662134 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:25:15.162440 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:25:15.662086 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:25:16.162278 2544921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:25:16.290384 2544921 kubeadm.go:1113] duration metric: took 4.291134312s to wait for elevateKubeSystemPrivileges
	I0930 10:25:16.290412 2544921 kubeadm.go:394] duration metric: took 20.272721488s to StartCluster
	I0930 10:25:16.290429 2544921 settings.go:142] acquiring lock: {Name:mkc704d8ddfae8fa577b296109d2f74f59988133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:25:16.291204 2544921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 10:25:16.291570 2544921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/kubeconfig: {Name:mk7f607d1d45d210ea4523c0a214397b48972e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:25:16.292349 2544921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 10:25:16.292416 2544921 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0930 10:25:16.292599 2544921 config.go:182] Loaded profile config "addons-472765": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 10:25:16.292633 2544921 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0930 10:25:16.292703 2544921 addons.go:69] Setting yakd=true in profile "addons-472765"
	I0930 10:25:16.292727 2544921 addons.go:234] Setting addon yakd=true in "addons-472765"
	I0930 10:25:16.292751 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.292836 2544921 addons.go:69] Setting inspektor-gadget=true in profile "addons-472765"
	I0930 10:25:16.292870 2544921 addons.go:234] Setting addon inspektor-gadget=true in "addons-472765"
	I0930 10:25:16.292921 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.293224 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.293690 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.293720 2544921 addons.go:69] Setting volumesnapshots=true in profile "addons-472765"
	I0930 10:25:16.293745 2544921 addons.go:234] Setting addon volumesnapshots=true in "addons-472765"
	I0930 10:25:16.293773 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.294191 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.296478 2544921 out.go:177] * Verifying Kubernetes components...
	I0930 10:25:16.296818 2544921 addons.go:69] Setting ingress=true in profile "addons-472765"
	I0930 10:25:16.296845 2544921 addons.go:234] Setting addon ingress=true in "addons-472765"
	I0930 10:25:16.296888 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.297358 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.297915 2544921 addons.go:69] Setting ingress-dns=true in profile "addons-472765"
	I0930 10:25:16.297945 2544921 addons.go:234] Setting addon ingress-dns=true in "addons-472765"
	I0930 10:25:16.297982 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.298429 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.301422 2544921 addons.go:69] Setting cloud-spanner=true in profile "addons-472765"
	I0930 10:25:16.301453 2544921 addons.go:234] Setting addon cloud-spanner=true in "addons-472765"
	I0930 10:25:16.301496 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.301959 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.293692 2544921 addons.go:69] Setting metrics-server=true in profile "addons-472765"
	I0930 10:25:16.303954 2544921 addons.go:234] Setting addon metrics-server=true in "addons-472765"
	I0930 10:25:16.304002 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.304470 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.293700 2544921 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-472765"
	I0930 10:25:16.308989 2544921 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-472765"
	I0930 10:25:16.309029 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.309498 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.314419 2544921 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-472765"
	I0930 10:25:16.314548 2544921 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-472765"
	I0930 10:25:16.314613 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.315150 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.293706 2544921 addons.go:69] Setting registry=true in profile "addons-472765"
	I0930 10:25:16.325383 2544921 addons.go:234] Setting addon registry=true in "addons-472765"
	I0930 10:25:16.325437 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.325908 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.293709 2544921 addons.go:69] Setting storage-provisioner=true in profile "addons-472765"
	I0930 10:25:16.339870 2544921 addons.go:234] Setting addon storage-provisioner=true in "addons-472765"
	I0930 10:25:16.339918 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.340399 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.340590 2544921 addons.go:69] Setting default-storageclass=true in profile "addons-472765"
	I0930 10:25:16.340622 2544921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-472765"
	I0930 10:25:16.340930 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.367670 2544921 addons.go:69] Setting gcp-auth=true in profile "addons-472765"
	I0930 10:25:16.367754 2544921 mustload.go:65] Loading cluster: addons-472765
	I0930 10:25:16.367978 2544921 config.go:182] Loaded profile config "addons-472765": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 10:25:16.368310 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.293713 2544921 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-472765"
	I0930 10:25:16.377722 2544921 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-472765"
	I0930 10:25:16.378082 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.379556 2544921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:25:16.293717 2544921 addons.go:69] Setting volcano=true in profile "addons-472765"
	I0930 10:25:16.392900 2544921 addons.go:234] Setting addon volcano=true in "addons-472765"
	I0930 10:25:16.392944 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.393522 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.479302 2544921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0930 10:25:16.479418 2544921 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 10:25:16.479440 2544921 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 10:25:16.484747 2544921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:25:16.484868 2544921 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 10:25:16.484895 2544921 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 10:25:16.484994 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.485237 2544921 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 10:25:16.485274 2544921 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 10:25:16.485344 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.511977 2544921 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0930 10:25:16.512152 2544921 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 10:25:16.512249 2544921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:25:16.534764 2544921 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:25:16.534821 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0930 10:25:16.534922 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.535109 2544921 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 10:25:16.538984 2544921 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:25:16.539066 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 10:25:16.539162 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.550454 2544921 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 10:25:16.551352 2544921 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:25:16.551373 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0930 10:25:16.551444 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.557124 2544921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 10:25:16.562410 2544921 addons.go:234] Setting addon default-storageclass=true in "addons-472765"
	I0930 10:25:16.562449 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.562886 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.563076 2544921 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 10:25:16.534046 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.569887 2544921 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-472765"
	I0930 10:25:16.569977 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:16.570479 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:16.585112 2544921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 10:25:16.585186 2544921 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 10:25:16.585298 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.593727 2544921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:25:16.593749 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 10:25:16.593809 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.602437 2544921 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 10:25:16.604319 2544921 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 10:25:16.609210 2544921 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 10:25:16.609235 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 10:25:16.609301 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.612655 2544921 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 10:25:16.612677 2544921 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 10:25:16.612744 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.636075 2544921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 10:25:16.639789 2544921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 10:25:16.642707 2544921 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 10:25:16.642729 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 10:25:16.642795 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.645503 2544921 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 10:25:16.647832 2544921 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 10:25:16.649567 2544921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 10:25:16.654000 2544921 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0930 10:25:16.654293 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.678745 2544921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 10:25:16.685424 2544921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0930 10:25:16.687452 2544921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 10:25:16.692283 2544921 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 10:25:16.692311 2544921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 10:25:16.692393 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.732385 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.743652 2544921 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0930 10:25:16.748477 2544921 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0930 10:25:16.754966 2544921 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0930 10:25:16.755081 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0930 10:25:16.755240 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
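The repeated "ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/..." lines above stream each generated manifest from memory to a file on the node over SSH, using the port and key shown in the sshutil lines. Below is a minimal sketch of that pattern with golang.org/x/crypto/ssh; the `writeRemoteFile` helper and the `sudo tee` trick are assumptions for illustration, not minikube's actual ssh_runner code.

```go
// Sketch: push an in-memory manifest to a remote path over SSH, roughly what
// the "scp memory --> /etc/kubernetes/addons/..." log lines describe.
// Assumes key-based auth and password-less sudo on the node (illustrative only).
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func writeRemoteFile(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// sudo tee writes stdin to a root-owned path without needing a writable SSH user dir.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:41303", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example\n") // placeholder content
	if err := writeRemoteFile(client, manifest, "/etc/kubernetes/addons/example.yaml"); err != nil {
		panic(err)
	}
}
```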
	I0930 10:25:16.757036 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.794707 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.806333 2544921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 10:25:16.807655 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.808995 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.822656 2544921 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 10:25:16.824586 2544921 out.go:177]   - Using image docker.io/busybox:stable
	I0930 10:25:16.826702 2544921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:25:16.826722 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 10:25:16.826785 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.847936 2544921 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 10:25:16.847958 2544921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 10:25:16.848023 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:16.875152 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.875802 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.876697 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.887991 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.898530 2544921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:25:16.915918 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.931748 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.949919 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:16.955410 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	W0930 10:25:16.956683 2544921 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0930 10:25:16.956779 2544921 retry.go:31] will retry after 313.257325ms: ssh: handshake failed: EOF
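The handshake failure just above is not fatal: retry.go schedules another attempt after roughly 313 ms. A small sketch of that jittered retry shape, purely illustrative and not minikube's retry.go:

```go
// Sketch: retry an operation a few times with a growing, jittered delay,
// the same shape as the "will retry after 313.257325ms" lines in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Grow the delay each attempt and add jitter so parallel callers don't sync up.
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```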
	I0930 10:25:17.202256 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:25:17.224711 2544921 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 10:25:17.224806 2544921 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 10:25:17.269776 2544921 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 10:25:17.269847 2544921 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 10:25:17.335123 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:25:17.352513 2544921 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 10:25:17.352588 2544921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 10:25:17.356623 2544921 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 10:25:17.356706 2544921 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 10:25:17.358124 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:25:17.429596 2544921 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 10:25:17.429675 2544921 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 10:25:17.432704 2544921 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 10:25:17.432775 2544921 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 10:25:17.460696 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:25:17.460960 2544921 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 10:25:17.460975 2544921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 10:25:17.471518 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:25:17.486516 2544921 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:25:17.486594 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 10:25:17.492452 2544921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 10:25:17.492525 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 10:25:17.512115 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0930 10:25:17.513213 2544921 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 10:25:17.513260 2544921 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 10:25:17.515545 2544921 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 10:25:17.515650 2544921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 10:25:17.546570 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 10:25:17.606879 2544921 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 10:25:17.606956 2544921 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 10:25:17.617871 2544921 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 10:25:17.617951 2544921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 10:25:17.642453 2544921 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 10:25:17.642527 2544921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 10:25:17.695508 2544921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 10:25:17.695583 2544921 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 10:25:17.751932 2544921 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 10:25:17.751953 2544921 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 10:25:17.754275 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:25:17.780207 2544921 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:25:17.780279 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 10:25:17.798907 2544921 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 10:25:17.798981 2544921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 10:25:17.881999 2544921 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 10:25:17.882071 2544921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 10:25:17.913800 2544921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:25:17.913877 2544921 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 10:25:17.962248 2544921 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 10:25:17.962325 2544921 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 10:25:17.974366 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:25:18.002614 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 10:25:18.007279 2544921 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 10:25:18.007390 2544921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 10:25:18.040901 2544921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 10:25:18.040979 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 10:25:18.108504 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:25:18.134105 2544921 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 10:25:18.134191 2544921 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 10:25:18.216045 2544921 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.409676217s)
	I0930 10:25:18.216072 2544921 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
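The pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to 192.168.49.1 inside the cluster: it inserts a hosts{} stanza ahead of the forward plugin and feeds the result back through kubectl replace. For clarity, here is the same edit sketched with client-go instead of sed; this is an illustration of the effect, not what minikube actually runs.

```go
// Sketch: add a hosts{} stanza for host.minikube.internal to the coredns
// Corefile, the same edit the "sed ... | kubectl replace -f -" pipeline performs.
package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
	// Insert the hosts block immediately before the forward plugin, as the sed script does.
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)

	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```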
	I0930 10:25:18.217166 2544921 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.318602953s)
	I0930 10:25:18.217840 2544921 node_ready.go:35] waiting up to 6m0s for node "addons-472765" to be "Ready" ...
	I0930 10:25:18.220817 2544921 node_ready.go:49] node "addons-472765" has status "Ready":"True"
	I0930 10:25:18.220885 2544921 node_ready.go:38] duration metric: took 3.027668ms for node "addons-472765" to be "Ready" ...
	I0930 10:25:18.220923 2544921 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:25:18.230415 2544921 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6l42" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:18.329839 2544921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 10:25:18.329921 2544921 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 10:25:18.336578 2544921 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 10:25:18.336655 2544921 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 10:25:18.448141 2544921 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 10:25:18.448219 2544921 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 10:25:18.607681 2544921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 10:25:18.607768 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 10:25:18.638722 2544921 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:25:18.638795 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 10:25:18.698211 2544921 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:25:18.698285 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 10:25:18.720289 2544921 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-472765" context rescaled to 1 replicas
	I0930 10:25:18.743535 2544921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 10:25:18.743627 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 10:25:18.816697 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:25:19.013743 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:25:19.092902 2544921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:25:19.092923 2544921 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 10:25:19.233418 2544921 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-l6l42" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-l6l42" not found
	I0930 10:25:19.233492 2544921 pod_ready.go:82] duration metric: took 1.003008265s for pod "coredns-7c65d6cfc9-l6l42" in "kube-system" namespace to be "Ready" ...
	E0930 10:25:19.233518 2544921 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-l6l42" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-l6l42" not found
	I0930 10:25:19.233542 2544921 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xf8vk" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:19.482893 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:25:21.272199 2544921 pod_ready.go:103] pod "coredns-7c65d6cfc9-xf8vk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:25:23.740834 2544921 pod_ready.go:103] pod "coredns-7c65d6cfc9-xf8vk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:25:23.778029 2544921 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 10:25:23.778190 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:23.800937 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:24.219824 2544921 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 10:25:24.310760 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.108416248s)
	I0930 10:25:24.310797 2544921 addons.go:475] Verifying addon ingress=true in "addons-472765"
	I0930 10:25:24.310989 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.975794102s)
	I0930 10:25:24.311038 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.952862559s)
	I0930 10:25:24.311086 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.850373373s)
	I0930 10:25:24.311136 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.839551801s)
	I0930 10:25:24.313088 2544921 out.go:177] * Verifying ingress addon...
	I0930 10:25:24.315890 2544921 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0930 10:25:24.319679 2544921 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0930 10:25:24.319706 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
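The kapi.go lines above poll the ingress-nginx namespace for pods matching app.kubernetes.io/name=ingress-nginx until every one reports Ready. A minimal client-go sketch of that wait loop follows; it assumes the on-node kubeconfig path from the log and is an illustration of the pattern, not kapi.go itself.

```go
// Sketch: poll pods matching a label selector until each reports the PodReady
// condition, the pattern behind the kapi.go "waiting for pod" lines.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func allReady(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selector := "app.kubernetes.io/name=ingress-nginx"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		list, err := cs.CoreV1().Pods("ingress-nginx").List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && allReady(list.Items) {
			fmt.Println("all pods ready for", selector)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for", selector)
}
```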
	I0930 10:25:24.437735 2544921 addons.go:234] Setting addon gcp-auth=true in "addons-472765"
	I0930 10:25:24.437790 2544921 host.go:66] Checking if "addons-472765" exists ...
	I0930 10:25:24.438289 2544921 cli_runner.go:164] Run: docker container inspect addons-472765 --format={{.State.Status}}
	I0930 10:25:24.468449 2544921 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 10:25:24.468507 2544921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-472765
	I0930 10:25:24.503783 2544921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41303 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/addons-472765/id_rsa Username:docker}
	I0930 10:25:24.821404 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:25.322386 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:25.808566 2544921 pod_ready.go:103] pod "coredns-7c65d6cfc9-xf8vk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:25:25.887926 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:26.327161 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:26.582777 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.070588258s)
	I0930 10:25:26.582845 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.036205216s)
	I0930 10:25:26.582882 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.828547687s)
	I0930 10:25:26.582898 2544921 addons.go:475] Verifying addon registry=true in "addons-472765"
	I0930 10:25:26.583061 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.608620376s)
	I0930 10:25:26.583283 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.58059098s)
	I0930 10:25:26.583425 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.474842677s)
	I0930 10:25:26.583447 2544921 addons.go:475] Verifying addon metrics-server=true in "addons-472765"
	I0930 10:25:26.583529 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.766754172s)
	I0930 10:25:26.583626 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.569856388s)
	W0930 10:25:26.583646 2544921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:25:26.583660 2544921 retry.go:31] will retry after 311.88327ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
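The failure above is a CRD establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation as the CRD that defines it, so the API server has no mapping for the kind yet ("ensure CRDs are installed first"). The log simply retries, and the later `kubectl apply --force` at 10:25:26 succeeds once the CRDs are registered. One common way to avoid the race entirely is to apply the CRDs first and wait for them to become Established before applying the dependent objects; the split below is an assumption sketched with os/exec and standard kubectl subcommands, not what minikube does.

```go
// Sketch: apply the snapshot CRDs first, wait for them to be Established,
// then apply the objects that depend on them, sidestepping the
// "no matches for kind ... ensure CRDs are installed first" race seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	crds := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
	}
	for _, f := range crds {
		if err := kubectl("apply", "-f", f); err != nil {
			panic(err)
		}
	}
	// Block until the API server has actually registered the new kinds.
	if err := kubectl("wait", "--for=condition=Established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
		"crd/volumesnapshots.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	// Now the custom resources and the controller can be applied safely.
	rest := []string{
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		"/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
		"/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml",
	}
	for _, f := range rest {
		if err := kubectl("apply", "-f", f); err != nil {
			panic(err)
		}
	}
	fmt.Println("snapshot addon applied")
}
```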
	I0930 10:25:26.584960 2544921 out.go:177] * Verifying registry addon...
	I0930 10:25:26.586474 2544921 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-472765 service yakd-dashboard -n yakd-dashboard
	
	I0930 10:25:26.590642 2544921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0930 10:25:26.639341 2544921 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 10:25:26.639384 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:26.895851 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:25:26.898915 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:27.194720 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:27.323179 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:27.550870 2544921 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.082388618s)
	I0930 10:25:27.550984 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.068006692s)
	I0930 10:25:27.551335 2544921 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-472765"
	I0930 10:25:27.554159 2544921 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 10:25:27.554223 2544921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:25:27.556671 2544921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 10:25:27.558558 2544921 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 10:25:27.560565 2544921 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 10:25:27.560592 2544921 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 10:25:27.569256 2544921 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 10:25:27.569283 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:27.640007 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:27.681652 2544921 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 10:25:27.681694 2544921 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 10:25:27.714176 2544921 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:25:27.714205 2544921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 10:25:27.745925 2544921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:25:27.820393 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:28.066041 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:28.097454 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:28.245361 2544921 pod_ready.go:103] pod "coredns-7c65d6cfc9-xf8vk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:25:28.321893 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:28.562352 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:28.594732 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:28.740134 2544921 pod_ready.go:93] pod "coredns-7c65d6cfc9-xf8vk" in "kube-system" namespace has status "Ready":"True"
	I0930 10:25:28.740159 2544921 pod_ready.go:82] duration metric: took 9.506577921s for pod "coredns-7c65d6cfc9-xf8vk" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:28.740171 2544921 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-472765" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:28.745386 2544921 pod_ready.go:93] pod "etcd-addons-472765" in "kube-system" namespace has status "Ready":"True"
	I0930 10:25:28.745406 2544921 pod_ready.go:82] duration metric: took 5.228068ms for pod "etcd-addons-472765" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:28.745421 2544921 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-472765" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:28.750839 2544921 pod_ready.go:93] pod "kube-apiserver-addons-472765" in "kube-system" namespace has status "Ready":"True"
	I0930 10:25:28.750863 2544921 pod_ready.go:82] duration metric: took 5.435344ms for pod "kube-apiserver-addons-472765" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:28.750875 2544921 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-472765" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:28.759344 2544921 pod_ready.go:93] pod "kube-controller-manager-addons-472765" in "kube-system" namespace has status "Ready":"True"
	I0930 10:25:28.759371 2544921 pod_ready.go:82] duration metric: took 8.487577ms for pod "kube-controller-manager-addons-472765" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:28.759384 2544921 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xvdqn" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:28.764732 2544921 pod_ready.go:93] pod "kube-proxy-xvdqn" in "kube-system" namespace has status "Ready":"True"
	I0930 10:25:28.764755 2544921 pod_ready.go:82] duration metric: took 5.363902ms for pod "kube-proxy-xvdqn" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:28.764766 2544921 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-472765" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:28.821102 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:28.824055 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.928162884s)
	I0930 10:25:29.000941 2544921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.254966106s)
	I0930 10:25:29.004973 2544921 addons.go:475] Verifying addon gcp-auth=true in "addons-472765"
	I0930 10:25:29.008100 2544921 out.go:177] * Verifying gcp-auth addon...
	I0930 10:25:29.010916 2544921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 10:25:29.020815 2544921 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:25:29.119678 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:29.122839 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:29.137773 2544921 pod_ready.go:93] pod "kube-scheduler-addons-472765" in "kube-system" namespace has status "Ready":"True"
	I0930 10:25:29.137844 2544921 pod_ready.go:82] duration metric: took 373.069551ms for pod "kube-scheduler-addons-472765" in "kube-system" namespace to be "Ready" ...
	I0930 10:25:29.137868 2544921 pod_ready.go:39] duration metric: took 10.916914048s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:25:29.137914 2544921 api_server.go:52] waiting for apiserver process to appear ...
	I0930 10:25:29.138007 2544921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:25:29.152520 2544921 api_server.go:72] duration metric: took 12.8600687s to wait for apiserver process to appear ...
	I0930 10:25:29.152549 2544921 api_server.go:88] waiting for apiserver healthz status ...
	I0930 10:25:29.152572 2544921 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0930 10:25:29.160515 2544921 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0930 10:25:29.161580 2544921 api_server.go:141] control plane version: v1.31.1
	I0930 10:25:29.161615 2544921 api_server.go:131] duration metric: took 9.050757ms to wait for apiserver health ...
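The healthz wait above probes https://192.168.49.2:8443/healthz until it answers 200 "ok", then reads the control-plane version. A minimal sketch of that probe follows; skipping TLS verification is tolerable only because the target is a throwaway local cluster, and this is an illustration rather than api_server.go's actual code.

```go
// Sketch: poll the apiserver /healthz endpoint until it returns 200 "ok",
// mirroring the api_server.go healthz wait in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // local test cluster only
	}
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")}
```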
	I0930 10:25:29.161634 2544921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 10:25:29.321760 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:29.344056 2544921 system_pods.go:59] 18 kube-system pods found
	I0930 10:25:29.344093 2544921 system_pods.go:61] "coredns-7c65d6cfc9-xf8vk" [b2abaa8e-13d0-412c-8e5a-27b78d1bb7c6] Running
	I0930 10:25:29.344103 2544921 system_pods.go:61] "csi-hostpath-attacher-0" [293d6e66-35e1-4643-b028-d7e56bdce4ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 10:25:29.344116 2544921 system_pods.go:61] "csi-hostpath-resizer-0" [4c400230-0d7c-4c26-be77-c86d57501c2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 10:25:29.344124 2544921 system_pods.go:61] "csi-hostpathplugin-wl5pv" [fe2c8300-80bf-4381-8534-9f6212d2a2dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 10:25:29.344131 2544921 system_pods.go:61] "etcd-addons-472765" [c55ef515-f8db-42f0-80b5-ecf838636049] Running
	I0930 10:25:29.344136 2544921 system_pods.go:61] "kindnet-wjzdr" [4cfbb61d-358d-4462-8280-543a4115df93] Running
	I0930 10:25:29.344140 2544921 system_pods.go:61] "kube-apiserver-addons-472765" [0eb96773-3e74-4a8d-ad54-01ac672c8b5d] Running
	I0930 10:25:29.344144 2544921 system_pods.go:61] "kube-controller-manager-addons-472765" [6b63dae0-c26a-4bd8-a8d4-8379f6cdb499] Running
	I0930 10:25:29.344158 2544921 system_pods.go:61] "kube-ingress-dns-minikube" [cb4d8a6a-ac53-4cfd-890d-07683ad920cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0930 10:25:29.344169 2544921 system_pods.go:61] "kube-proxy-xvdqn" [6da656c3-e45a-4257-a513-09d5565027a5] Running
	I0930 10:25:29.344173 2544921 system_pods.go:61] "kube-scheduler-addons-472765" [a4ca6729-4c66-45bc-9b67-c02750066f54] Running
	I0930 10:25:29.344179 2544921 system_pods.go:61] "metrics-server-84c5f94fbc-8r8w8" [8e4539cd-6124-40d5-bcb5-a9852f6ac989] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 10:25:29.344186 2544921 system_pods.go:61] "nvidia-device-plugin-daemonset-dzrd4" [ea4e804f-a811-4b14-98c6-2dd6b0db0c84] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0930 10:25:29.344197 2544921 system_pods.go:61] "registry-66c9cd494c-tbdxk" [54bb36b5-2e90-4a96-b79b-47a74c25caa2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0930 10:25:29.344203 2544921 system_pods.go:61] "registry-proxy-xd6th" [1c60d130-5484-480e-83cf-7713c3c48f30] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 10:25:29.344214 2544921 system_pods.go:61] "snapshot-controller-56fcc65765-5ccvc" [7a86599d-4a45-440c-908a-c2de69156b6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:25:29.344224 2544921 system_pods.go:61] "snapshot-controller-56fcc65765-xhqvs" [eaf8016d-db85-46ba-b76b-2d66fefc0b2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:25:29.344228 2544921 system_pods.go:61] "storage-provisioner" [370fdec1-899d-48b2-bf4e-28d31e8bb596] Running
	I0930 10:25:29.344234 2544921 system_pods.go:74] duration metric: took 182.587407ms to wait for pod list to return data ...
	I0930 10:25:29.344245 2544921 default_sa.go:34] waiting for default service account to be created ...
	I0930 10:25:29.542252 2544921 default_sa.go:45] found service account: "default"
	I0930 10:25:29.542282 2544921 default_sa.go:55] duration metric: took 198.029341ms for default service account to be created ...
	I0930 10:25:29.542293 2544921 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 10:25:29.562927 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:29.594972 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:29.746200 2544921 system_pods.go:86] 18 kube-system pods found
	I0930 10:25:29.746239 2544921 system_pods.go:89] "coredns-7c65d6cfc9-xf8vk" [b2abaa8e-13d0-412c-8e5a-27b78d1bb7c6] Running
	I0930 10:25:29.746252 2544921 system_pods.go:89] "csi-hostpath-attacher-0" [293d6e66-35e1-4643-b028-d7e56bdce4ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 10:25:29.746261 2544921 system_pods.go:89] "csi-hostpath-resizer-0" [4c400230-0d7c-4c26-be77-c86d57501c2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 10:25:29.746269 2544921 system_pods.go:89] "csi-hostpathplugin-wl5pv" [fe2c8300-80bf-4381-8534-9f6212d2a2dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 10:25:29.746274 2544921 system_pods.go:89] "etcd-addons-472765" [c55ef515-f8db-42f0-80b5-ecf838636049] Running
	I0930 10:25:29.746279 2544921 system_pods.go:89] "kindnet-wjzdr" [4cfbb61d-358d-4462-8280-543a4115df93] Running
	I0930 10:25:29.746291 2544921 system_pods.go:89] "kube-apiserver-addons-472765" [0eb96773-3e74-4a8d-ad54-01ac672c8b5d] Running
	I0930 10:25:29.746295 2544921 system_pods.go:89] "kube-controller-manager-addons-472765" [6b63dae0-c26a-4bd8-a8d4-8379f6cdb499] Running
	I0930 10:25:29.746311 2544921 system_pods.go:89] "kube-ingress-dns-minikube" [cb4d8a6a-ac53-4cfd-890d-07683ad920cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0930 10:25:29.746315 2544921 system_pods.go:89] "kube-proxy-xvdqn" [6da656c3-e45a-4257-a513-09d5565027a5] Running
	I0930 10:25:29.746320 2544921 system_pods.go:89] "kube-scheduler-addons-472765" [a4ca6729-4c66-45bc-9b67-c02750066f54] Running
	I0930 10:25:29.746327 2544921 system_pods.go:89] "metrics-server-84c5f94fbc-8r8w8" [8e4539cd-6124-40d5-bcb5-a9852f6ac989] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 10:25:29.746338 2544921 system_pods.go:89] "nvidia-device-plugin-daemonset-dzrd4" [ea4e804f-a811-4b14-98c6-2dd6b0db0c84] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0930 10:25:29.746344 2544921 system_pods.go:89] "registry-66c9cd494c-tbdxk" [54bb36b5-2e90-4a96-b79b-47a74c25caa2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0930 10:25:29.746352 2544921 system_pods.go:89] "registry-proxy-xd6th" [1c60d130-5484-480e-83cf-7713c3c48f30] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 10:25:29.746363 2544921 system_pods.go:89] "snapshot-controller-56fcc65765-5ccvc" [7a86599d-4a45-440c-908a-c2de69156b6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:25:29.746378 2544921 system_pods.go:89] "snapshot-controller-56fcc65765-xhqvs" [eaf8016d-db85-46ba-b76b-2d66fefc0b2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:25:29.746383 2544921 system_pods.go:89] "storage-provisioner" [370fdec1-899d-48b2-bf4e-28d31e8bb596] Running
	I0930 10:25:29.746390 2544921 system_pods.go:126] duration metric: took 204.091559ms to wait for k8s-apps to be running ...
	I0930 10:25:29.746401 2544921 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 10:25:29.746460 2544921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:25:29.759899 2544921 system_svc.go:56] duration metric: took 13.485543ms WaitForService to wait for kubelet
	I0930 10:25:29.759928 2544921 kubeadm.go:582] duration metric: took 13.467483389s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:25:29.759951 2544921 node_conditions.go:102] verifying NodePressure condition ...
	I0930 10:25:29.820934 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:29.937813 2544921 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0930 10:25:29.937850 2544921 node_conditions.go:123] node cpu capacity is 2
	I0930 10:25:29.937866 2544921 node_conditions.go:105] duration metric: took 177.907216ms to run NodePressure ...
	I0930 10:25:29.937879 2544921 start.go:241] waiting for startup goroutines ...
	I0930 10:25:30.067573 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:30.098182 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:30.321829 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:30.563229 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:30.595161 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:30.822717 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:31.064645 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:31.095807 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:31.320658 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:31.618655 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:31.619232 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:31.820401 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:32.118454 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:32.119254 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:32.321270 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:32.563288 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:32.598236 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:32.821153 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:33.122982 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:33.123654 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:33.320821 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:33.562485 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:33.595696 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:33.823923 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:34.065117 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:34.094974 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:34.321052 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:34.562192 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:34.594902 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:34.820520 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:35.062102 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:35.095176 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:35.321303 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:35.561489 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:35.594842 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:35.820397 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:36.120782 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:36.122378 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:36.320303 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:36.561215 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:36.595024 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:36.820805 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:37.062065 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:37.096510 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:37.321789 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:37.563976 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:37.593932 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:37.823215 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:38.062228 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:38.095780 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:38.324022 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:38.561943 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:38.595019 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:38.820421 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:39.118854 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:39.119468 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:39.322772 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:39.561707 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:39.595220 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:39.820506 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:40.062888 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:40.095668 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:40.321347 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:40.561496 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:40.595063 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:40.826015 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:41.067960 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:41.095276 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:41.321963 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:41.562435 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:41.595283 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:41.821022 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:42.062475 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:42.099638 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:42.321665 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:42.562057 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:42.594590 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:42.821287 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:43.061130 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:43.094044 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:43.320908 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:43.561429 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:43.594800 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:43.819655 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:44.117752 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:44.119020 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:44.319946 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:44.561194 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:44.595092 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:44.821408 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:45.071835 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:45.097207 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:45.362499 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:45.566040 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:45.607290 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:45.819751 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:46.061450 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:46.095666 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:46.320755 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:46.561776 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:46.596223 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:46.820917 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:47.062618 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:47.094677 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:47.321539 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:47.616579 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:47.618027 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:47.820497 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:48.061957 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:48.095350 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:48.321488 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:48.562430 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:48.596674 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:48.820560 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:49.061208 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:49.094752 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:49.320115 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:49.562113 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:49.595233 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:49.821371 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:50.062719 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:50.094875 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:50.320525 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:50.561296 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:50.595496 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:50.819778 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:51.061582 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:51.095521 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:25:51.320622 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:51.562115 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:51.595678 2544921 kapi.go:107] duration metric: took 25.005031088s to wait for kubernetes.io/minikube-addons=registry ...
	I0930 10:25:51.821196 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:52.065695 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:52.320731 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:52.562122 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:52.821384 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:53.062045 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:53.320264 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:53.562105 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:53.820272 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:54.061549 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:54.321055 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:54.562949 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:54.820364 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:55.062000 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:55.321167 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:55.561720 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:55.827059 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:56.061529 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:56.321148 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:56.564195 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:56.820501 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:57.065365 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:57.320989 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:57.618345 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:57.820632 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:58.062186 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:58.320642 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:58.563508 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:58.820462 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:59.067692 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:59.320897 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:25:59.561920 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:25:59.821876 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:00.142935 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:00.337324 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:00.622309 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:00.821225 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:01.062832 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:01.323270 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:01.562223 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:01.820125 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:02.061437 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:02.321311 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:02.563021 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:02.843854 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:03.062805 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:03.320586 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:03.561872 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:03.820293 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:04.061606 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:04.320036 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:04.562521 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:04.821475 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:05.118907 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:05.320943 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:05.562948 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:05.821225 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:06.063288 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:06.321680 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:06.562165 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:06.821089 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:07.062988 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:07.320986 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:07.622151 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:07.820502 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:08.069920 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:08.321459 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:08.616891 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:08.821170 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:09.062404 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:09.330740 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:09.563026 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:09.820149 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:10.084760 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:10.320132 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:10.562480 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:10.822428 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:11.061408 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:11.320829 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:11.561675 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:11.820996 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:12.064043 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:12.320481 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:12.561630 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:26:12.820843 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:13.061649 2544921 kapi.go:107] duration metric: took 45.504975349s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0930 10:26:13.320276 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:13.820054 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:14.320964 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:14.820494 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:15.319909 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:15.819851 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:16.320283 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:16.820733 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:17.321053 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:17.823159 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:18.320092 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:18.820964 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:19.320048 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:19.820142 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:20.320361 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:20.820863 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:21.320328 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:21.820708 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:22.320011 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:22.821154 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:23.320047 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:23.820092 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:24.320536 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:24.821277 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:25.319901 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:25.820906 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:26.320380 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:26.820752 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:27.320099 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:27.820360 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:28.320565 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:28.820826 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:29.320844 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:29.821194 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:30.321400 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:30.820579 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:31.321032 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:31.820861 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:32.321598 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:32.821251 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:33.321906 2544921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:26:33.820484 2544921 kapi.go:107] duration metric: took 1m9.504590784s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0930 10:26:51.035557 2544921 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:26:51.035580 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:51.515120 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:52.015681 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:52.515795 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:53.015397 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:53.514468 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:54.015873 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:54.516024 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:55.017789 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:55.514961 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:56.015008 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:56.514700 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:57.016374 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:57.514683 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:58.014960 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:58.514373 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:59.014522 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:26:59.514458 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:00.029024 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:00.515556 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:01.014727 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:01.514864 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:02.015280 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:02.514430 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:03.015203 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:03.515298 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:04.014626 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:04.514820 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:05.015526 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:05.514168 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:06.018018 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:06.514711 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:07.014624 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:07.515109 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:08.015189 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:08.514354 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:09.015134 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:09.514063 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:10.023407 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:10.514068 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:11.015268 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:11.514961 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:12.015593 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:12.514779 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:13.014871 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:13.514585 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:14.014673 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:14.514294 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:15.025083 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:15.515067 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:16.014573 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:16.514329 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:17.014794 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:17.514970 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:18.015041 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:18.514307 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:19.014726 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:19.514203 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:20.016696 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:20.514279 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:21.018792 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:21.514704 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:22.015167 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:22.514437 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:23.015334 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:23.514423 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:24.015489 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:24.514405 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:25.015447 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:25.514207 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:26.015036 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:26.514782 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:27.014525 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:27.514551 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:28.014838 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:28.514699 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:29.014544 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:29.514820 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:30.015351 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:30.514071 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:31.015408 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:31.514519 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:32.015220 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:32.515196 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:33.015694 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:33.515395 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:34.014637 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:34.514093 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:35.016188 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:35.514831 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:36.024185 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:36.515324 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:37.016670 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:37.515414 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:38.014795 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:38.515228 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:39.016227 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:39.515286 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:40.029695 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:40.514508 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:41.014721 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:41.514806 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:42.017173 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:42.515088 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:43.014918 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:43.514628 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:44.014883 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:44.514657 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:45.018362 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:45.514915 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:46.014913 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:46.514363 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:47.014507 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:47.515176 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:48.015555 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:48.514143 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:49.015178 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:49.515725 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:50.021546 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:50.514120 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:51.014739 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:51.515443 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:52.014345 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:52.514955 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:53.015144 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:53.514453 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:54.016432 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:54.514692 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:55.017855 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:55.514400 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:56.014586 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:56.514363 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:57.015060 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:57.514917 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:58.015234 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:58.522592 2544921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:27:59.082206 2544921 kapi.go:107] duration metric: took 2m30.071289773s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0930 10:27:59.084247 2544921 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-472765 cluster.
	I0930 10:27:59.086153 2544921 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 10:27:59.087769 2544921 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 10:27:59.089742 2544921 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, storage-provisioner-rancher, volcano, cloud-spanner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0930 10:27:59.091498 2544921 addons.go:510] duration metric: took 2m42.798861008s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner storage-provisioner-rancher volcano cloud-spanner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0930 10:27:59.091543 2544921 start.go:246] waiting for cluster config update ...
	I0930 10:27:59.091564 2544921 start.go:255] writing updated cluster config ...
	I0930 10:27:59.091914 2544921 ssh_runner.go:195] Run: rm -f paused
	I0930 10:27:59.438387 2544921 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 10:27:59.440469 2544921 out.go:177] * Done! kubectl is now configured to use "addons-472765" cluster and "default" namespace by default
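Note on the wait loop above: the repeated kapi.go:96 lines are a label-selector poll that runs roughly every 500ms until every pod carrying the kubernetes.io/minikube-addons=gcp-auth label reports Running, at which point kapi.go:107 logs the total duration. The following is a minimal client-go sketch of that pattern for illustration only; it is not minikube's actual kapi.go code, and the waitForLabel helper name and 6-minute timeout are made up for this example.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls every 500ms until all pods matching selector in ns are Running,
// mirroring the "waiting for pod ... current state: Pending" lines in the log above.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or pods not created yet: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(context.Background(), cs, "gcp-auth",
		"kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
		panic(err)
	}
}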
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b588f3e74c5c8       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   7cef480580fea       gcp-auth-89d5ffd79-4b2l7
	d42f0789fa654       6aa88c604f2b4       4 minutes ago       Running             volcano-scheduler                        1                   8aab17947f4e1       volcano-scheduler-6c9778cbdf-lqdvw
	d55a2e5f181f2       1a9605c872c1d       4 minutes ago       Running             admission                                0                   977762acdd983       volcano-admission-5874dfdd79-zml2h
	0bd85591d7a87       289a818c8d9c5       4 minutes ago       Running             controller                               0                   cf20a6033bc4d       ingress-nginx-controller-bc57996ff-vfnjc
	92c97f1e1b440       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   fe19c9baefbcb       csi-hostpathplugin-wl5pv
	9ce09ea56facc       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   fe19c9baefbcb       csi-hostpathplugin-wl5pv
	379717b27dc6c       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   fe19c9baefbcb       csi-hostpathplugin-wl5pv
	e250c6c90f79b       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   fe19c9baefbcb       csi-hostpathplugin-wl5pv
	df3f141330d50       23cbb28ae641a       5 minutes ago       Running             volcano-controllers                      0                   2c79cf5811b67       volcano-controllers-789ffc5785-kmkvq
	a6f58e21702e2       420193b27261a       5 minutes ago       Exited              patch                                    0                   cf7887e9fad8b       ingress-nginx-admission-patch-9v6mw
	f099083b08822       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   fe19c9baefbcb       csi-hostpathplugin-wl5pv
	472f855391eda       420193b27261a       5 minutes ago       Exited              create                                   0                   9b4da4c666964       ingress-nginx-admission-create-gc9sx
	bf75e627397dc       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   9957d7d111048       metrics-server-84c5f94fbc-8r8w8
	962c10b4715bc       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   85bd78896e835       snapshot-controller-56fcc65765-5ccvc
	1d4876c317f83       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   5327b9d58b102       snapshot-controller-56fcc65765-xhqvs
	a660619910281       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   96c9b34baa783       cloud-spanner-emulator-5b584cc74-6m9r4
	827332b984539       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   f991998bacb15       local-path-provisioner-86d989889c-bhbsl
	605514abea1c5       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   083f18ec5b389       registry-66c9cd494c-tbdxk
	e5a8728e36db9       f7ed138f698f6       5 minutes ago       Running             registry-proxy                           0                   25ba195d71211       registry-proxy-xd6th
	cda4b6dc28eed       77bdba588b953       5 minutes ago       Running             yakd                                     0                   3de46b44dda27       yakd-dashboard-67d98fc6b-2qzzq
	39b2d0291257b       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   e972834bc559e       nvidia-device-plugin-daemonset-dzrd4
	b53b54479c94f       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   920f985dc5ae2       csi-hostpath-attacher-0
	37c9229d318b0       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   d872d163ff76e       csi-hostpath-resizer-0
	26dfa3fc9e106       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   fe19c9baefbcb       csi-hostpathplugin-wl5pv
	63d30339756bf       6aa88c604f2b4       5 minutes ago       Exited              volcano-scheduler                        0                   8aab17947f4e1       volcano-scheduler-6c9778cbdf-lqdvw
	a5b65bd4ffa99       4f725bf50aaa5       5 minutes ago       Running             gadget                                   0                   0ec1517d24ed7       gadget-v6fkb
	eec8133237766       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   2c59c98b2a772       kube-ingress-dns-minikube
	dd7be19d2b2d8       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   fc2e680c6b291       coredns-7c65d6cfc9-xf8vk
	372ac1a324c40       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   13435cf28a0a9       storage-provisioner
	6454808cbe6a1       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   01e480245a56b       kindnet-wjzdr
	ee0f198764c53       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   98bb9812152e6       kube-proxy-xvdqn
	eab734a94f28a       27e3830e14027       6 minutes ago       Running             etcd                                     0                   1011da7b9ea72       etcd-addons-472765
	bd9a328e85c36       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   adf14d85c0d26       kube-scheduler-addons-472765
	dbed1ba2d4c67       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   79e14e72e8f71       kube-controller-manager-addons-472765
	ba3bd29678d7e       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   2928de452d511       kube-apiserver-addons-472765
	
	
	==> containerd <==
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.099114718Z" level=info msg="TearDown network for sandbox \"2ced06b74d7755041559e0fcb1caba6af904e83f4b61d008c436ec7b327ddf5e\" successfully"
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.099158123Z" level=info msg="StopPodSandbox for \"2ced06b74d7755041559e0fcb1caba6af904e83f4b61d008c436ec7b327ddf5e\" returns successfully"
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.099972222Z" level=info msg="RemovePodSandbox for \"2ced06b74d7755041559e0fcb1caba6af904e83f4b61d008c436ec7b327ddf5e\""
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.100017063Z" level=info msg="Forcibly stopping sandbox \"2ced06b74d7755041559e0fcb1caba6af904e83f4b61d008c436ec7b327ddf5e\""
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.107821272Z" level=info msg="TearDown network for sandbox \"2ced06b74d7755041559e0fcb1caba6af904e83f4b61d008c436ec7b327ddf5e\" successfully"
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.114283378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ced06b74d7755041559e0fcb1caba6af904e83f4b61d008c436ec7b327ddf5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.114422208Z" level=info msg="RemovePodSandbox \"2ced06b74d7755041559e0fcb1caba6af904e83f4b61d008c436ec7b327ddf5e\" returns successfully"
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.115029120Z" level=info msg="StopPodSandbox for \"49f13ee5ba47d63b6efe5513bdbfe4dceb2d3bcb4ea402eb2a094fc1f87f3f88\""
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.122909718Z" level=info msg="TearDown network for sandbox \"49f13ee5ba47d63b6efe5513bdbfe4dceb2d3bcb4ea402eb2a094fc1f87f3f88\" successfully"
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.122950284Z" level=info msg="StopPodSandbox for \"49f13ee5ba47d63b6efe5513bdbfe4dceb2d3bcb4ea402eb2a094fc1f87f3f88\" returns successfully"
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.123707514Z" level=info msg="RemovePodSandbox for \"49f13ee5ba47d63b6efe5513bdbfe4dceb2d3bcb4ea402eb2a094fc1f87f3f88\""
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.123751969Z" level=info msg="Forcibly stopping sandbox \"49f13ee5ba47d63b6efe5513bdbfe4dceb2d3bcb4ea402eb2a094fc1f87f3f88\""
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.131566656Z" level=info msg="TearDown network for sandbox \"49f13ee5ba47d63b6efe5513bdbfe4dceb2d3bcb4ea402eb2a094fc1f87f3f88\" successfully"
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.137597233Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49f13ee5ba47d63b6efe5513bdbfe4dceb2d3bcb4ea402eb2a094fc1f87f3f88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 30 10:28:11 addons-472765 containerd[817]: time="2024-09-30T10:28:11.137710085Z" level=info msg="RemovePodSandbox \"49f13ee5ba47d63b6efe5513bdbfe4dceb2d3bcb4ea402eb2a094fc1f87f3f88\" returns successfully"
	Sep 30 10:29:11 addons-472765 containerd[817]: time="2024-09-30T10:29:11.141908240Z" level=info msg="RemoveContainer for \"b43f52f188d3e26ec3c8c443f9f5e9bdd617edbf8840cb54985b8a3208b743e6\""
	Sep 30 10:29:11 addons-472765 containerd[817]: time="2024-09-30T10:29:11.148531365Z" level=info msg="RemoveContainer for \"b43f52f188d3e26ec3c8c443f9f5e9bdd617edbf8840cb54985b8a3208b743e6\" returns successfully"
	Sep 30 10:29:11 addons-472765 containerd[817]: time="2024-09-30T10:29:11.150613549Z" level=info msg="StopPodSandbox for \"392b22adb13d2d3abc0f2578a53c62c52b2241fed04f046eb7d3e028ea251865\""
	Sep 30 10:29:11 addons-472765 containerd[817]: time="2024-09-30T10:29:11.158778750Z" level=info msg="TearDown network for sandbox \"392b22adb13d2d3abc0f2578a53c62c52b2241fed04f046eb7d3e028ea251865\" successfully"
	Sep 30 10:29:11 addons-472765 containerd[817]: time="2024-09-30T10:29:11.158818184Z" level=info msg="StopPodSandbox for \"392b22adb13d2d3abc0f2578a53c62c52b2241fed04f046eb7d3e028ea251865\" returns successfully"
	Sep 30 10:29:11 addons-472765 containerd[817]: time="2024-09-30T10:29:11.159359004Z" level=info msg="RemovePodSandbox for \"392b22adb13d2d3abc0f2578a53c62c52b2241fed04f046eb7d3e028ea251865\""
	Sep 30 10:29:11 addons-472765 containerd[817]: time="2024-09-30T10:29:11.159517379Z" level=info msg="Forcibly stopping sandbox \"392b22adb13d2d3abc0f2578a53c62c52b2241fed04f046eb7d3e028ea251865\""
	Sep 30 10:29:11 addons-472765 containerd[817]: time="2024-09-30T10:29:11.168111484Z" level=info msg="TearDown network for sandbox \"392b22adb13d2d3abc0f2578a53c62c52b2241fed04f046eb7d3e028ea251865\" successfully"
	Sep 30 10:29:11 addons-472765 containerd[817]: time="2024-09-30T10:29:11.175269291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"392b22adb13d2d3abc0f2578a53c62c52b2241fed04f046eb7d3e028ea251865\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 30 10:29:11 addons-472765 containerd[817]: time="2024-09-30T10:29:11.175417507Z" level=info msg="RemovePodSandbox \"392b22adb13d2d3abc0f2578a53c62c52b2241fed04f046eb7d3e028ea251865\" returns successfully"
	
	
	==> coredns [dd7be19d2b2d86244f09a8fb2b1413baf7836782e407f52bf087040042c5c614] <==
	[INFO] 10.244.0.8:45143 - 44680 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000057862s
	[INFO] 10.244.0.8:45143 - 46756 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001586826s
	[INFO] 10.244.0.8:45143 - 37940 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001495528s
	[INFO] 10.244.0.8:45143 - 25916 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000066896s
	[INFO] 10.244.0.8:45143 - 63106 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000064951s
	[INFO] 10.244.0.8:36157 - 32526 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104179s
	[INFO] 10.244.0.8:36157 - 32781 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000054375s
	[INFO] 10.244.0.8:38509 - 17517 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082757s
	[INFO] 10.244.0.8:38509 - 17706 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037505s
	[INFO] 10.244.0.8:48200 - 20423 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000469051s
	[INFO] 10.244.0.8:48200 - 20620 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048828s
	[INFO] 10.244.0.8:50654 - 33263 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001807919s
	[INFO] 10.244.0.8:50654 - 33468 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001267961s
	[INFO] 10.244.0.8:34914 - 50055 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000066879s
	[INFO] 10.244.0.8:34914 - 50194 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000038268s
	[INFO] 10.244.0.24:36323 - 55376 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000190481s
	[INFO] 10.244.0.24:48135 - 47691 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000163445s
	[INFO] 10.244.0.24:43836 - 31360 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000311046s
	[INFO] 10.244.0.24:38020 - 63165 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000384055s
	[INFO] 10.244.0.24:48082 - 5157 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000238718s
	[INFO] 10.244.0.24:42179 - 10137 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000227715s
	[INFO] 10.244.0.24:52255 - 24330 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00313697s
	[INFO] 10.244.0.24:56017 - 15566 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.0034991s
	[INFO] 10.244.0.24:43525 - 10353 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004078116s
	[INFO] 10.244.0.24:40959 - 36555 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003744645s
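The NXDOMAIN lines above are expected behaviour rather than failures: with the cluster-first DNS policy, short names such as storage.googleapis.com are tried against each search domain in the pod's /etc/resolv.conf before the bare name is resolved, and every expansion that does not exist returns NXDOMAIN. A representative resolv.conf for a pod in the gcp-auth namespace, assuming Kubernetes defaults (the actual file is not captured in this log, and the nameserver address shown is only the usual kube-dns ClusterIP, not taken from this run), would be:

	search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10
	options ndots:5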
	
	
	==> describe nodes <==
	Name:               addons-472765
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-472765
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=addons-472765
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T10_25_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-472765
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-472765"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 10:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-472765
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 10:31:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 10:28:15 +0000   Mon, 30 Sep 2024 10:25:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 10:28:15 +0000   Mon, 30 Sep 2024 10:25:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 10:28:15 +0000   Mon, 30 Sep 2024 10:25:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 10:28:15 +0000   Mon, 30 Sep 2024 10:25:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-472765
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 34f4178edda54191b2f98205a24fcbe8
	  System UUID:                66113858-d0eb-49dd-b462-fd894772a847
	  Boot ID:                    65cfb3b2-92d4-49d4-b46a-56cf6adc9d81
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-6m9r4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  gadget                      gadget-v6fkb                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-4b2l7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vfnjc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m54s
	  kube-system                 coredns-7c65d6cfc9-xf8vk                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpathplugin-wl5pv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 etcd-addons-472765                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m7s
	  kube-system                 kindnet-wjzdr                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-472765                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-472765       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-xvdqn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-472765                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 metrics-server-84c5f94fbc-8r8w8             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-dzrd4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-66c9cd494c-tbdxk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-xd6th                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-5ccvc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-xhqvs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-bhbsl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-5874dfdd79-zml2h          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-controllers-789ffc5785-kmkvq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-6c9778cbdf-lqdvw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-2qzzq              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m2s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m15s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m14s (x8 over 6m15s)  kubelet          Node addons-472765 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     6m14s (x7 over 6m15s)  kubelet          Node addons-472765 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m14s (x7 over 6m15s)  kubelet          Node addons-472765 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s                   kubelet          Node addons-472765 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s                   kubelet          Node addons-472765 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s                   kubelet          Node addons-472765 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s                   node-controller  Node addons-472765 event: Registered Node addons-472765 in Controller
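A quick sanity check against the allocation table above: the node's allocatable CPU is 2000m and the 27 non-terminated pods already request 1050m (52%), so only about 950m remains. Any additional pod requesting a full CPU therefore cannot fit on this single-node cluster, which is consistent with the Volcano test job staying Pending as unschedulable for lack of CPU (the vcjob's own resource request is not shown in this log, so the 1000m figure below is an assumption):

	2000m allocatable - 1050m requested = 950m free  <  1000m assumed request for test-job-nginx-0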
	
	
	==> dmesg <==
	
	
	==> etcd [eab734a94f28a3482d7381a8248629f4b0eace6f92ffe9490c1fb364c0edd991] <==
	{"level":"info","ts":"2024-09-30T10:25:05.051307Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T10:25:05.051810Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T10:25:05.051976Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T10:25:05.052206Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-30T10:25:05.052307Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-30T10:25:05.935661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-30T10:25:05.935779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-30T10:25:05.935818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-30T10:25:05.935912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-30T10:25:05.935963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-30T10:25:05.936004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-30T10:25:05.936051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-30T10:25:05.939370Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:25:05.941892Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-472765 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T10:25:05.942143Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:25:05.942223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:25:05.942474Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T10:25:05.942567Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T10:25:05.944112Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:25:05.948443Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T10:25:05.952466Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:25:05.959941Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-30T10:25:05.955946Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:25:05.963669Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:25:05.963852Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [b588f3e74c5c80b33345b9c0a75d5ce82c1d1a2c1d41869080b1d2f8b562abfc] <==
	2024/09/30 10:27:58 GCP Auth Webhook started!
	2024/09/30 10:28:15 Ready to marshal response ...
	2024/09/30 10:28:15 Ready to write response ...
	2024/09/30 10:28:16 Ready to marshal response ...
	2024/09/30 10:28:16 Ready to write response ...
	
	
	==> kernel <==
	 10:31:18 up 1 day, 18:13,  0 users,  load average: 0.20, 1.26, 2.19
	Linux addons-472765 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6454808cbe6a1ae46051239484dbfd2144a2217b8a9f70cd82908b04683e6fa5] <==
	I0930 10:29:16.336292       1 main.go:299] handling current node
	I0930 10:29:26.343705       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:29:26.343753       1 main.go:299] handling current node
	I0930 10:29:36.343767       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:29:36.343800       1 main.go:299] handling current node
	I0930 10:29:46.344036       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:29:46.344076       1 main.go:299] handling current node
	I0930 10:29:56.340633       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:29:56.340737       1 main.go:299] handling current node
	I0930 10:30:06.335767       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:30:06.335874       1 main.go:299] handling current node
	I0930 10:30:16.336398       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:30:16.336432       1 main.go:299] handling current node
	I0930 10:30:26.339682       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:30:26.339716       1 main.go:299] handling current node
	I0930 10:30:36.335677       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:30:36.335714       1 main.go:299] handling current node
	I0930 10:30:46.343725       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:30:46.343763       1 main.go:299] handling current node
	I0930 10:30:56.344159       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:30:56.344196       1 main.go:299] handling current node
	I0930 10:31:06.336168       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:31:06.336204       1 main.go:299] handling current node
	I0930 10:31:16.336356       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:31:16.336390       1 main.go:299] handling current node
	
	
	==> kube-apiserver [ba3bd29678d7ef90309a87997b2b8c52af18b0428bd6c60c5d1ef2799a6de117] <==
	W0930 10:26:29.758518       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:30.825307       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:31.921921       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:31.999616       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.63.163:443: connect: connection refused
	E0930 10:26:31.999654       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.63.163:443: connect: connection refused" logger="UnhandledError"
	W0930 10:26:32.004862       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:32.048216       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.63.163:443: connect: connection refused
	E0930 10:26:32.048264       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.63.163:443: connect: connection refused" logger="UnhandledError"
	W0930 10:26:32.049933       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:33.035270       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:34.187429       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:35.277645       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:36.333602       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:37.387784       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:38.007126       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:39.090508       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:40.121557       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.188.197:443: connect: connection refused
	W0930 10:26:50.951339       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.63.163:443: connect: connection refused
	E0930 10:26:50.951378       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.63.163:443: connect: connection refused" logger="UnhandledError"
	W0930 10:27:32.018475       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.63.163:443: connect: connection refused
	E0930 10:27:32.018523       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.63.163:443: connect: connection refused" logger="UnhandledError"
	W0930 10:27:32.056396       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.63.163:443: connect: connection refused
	E0930 10:27:32.056544       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.63.163:443: connect: connection refused" logger="UnhandledError"
	I0930 10:28:15.975195       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0930 10:28:16.016633       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [dbed1ba2d4c671930b99baf79fd9cf1526ca5dff98188aebf6f2f0dba5ca11c5] <==
	I0930 10:27:32.061199       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0930 10:27:32.068236       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0930 10:27:32.077501       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0930 10:27:32.083633       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0930 10:27:32.095065       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0930 10:27:32.829524       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0930 10:27:33.853373       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0930 10:27:33.869784       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0930 10:27:34.866972       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0930 10:27:34.994326       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0930 10:27:35.017548       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0930 10:27:36.015849       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0930 10:27:36.028283       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0930 10:27:36.028385       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0930 10:27:36.039852       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0930 10:27:36.043411       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0930 10:27:36.046042       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0930 10:27:58.976641       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="15.508094ms"
	I0930 10:27:58.976724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="48.911µs"
	I0930 10:28:06.038347       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0930 10:28:06.043381       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0930 10:28:06.096467       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0930 10:28:06.101537       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0930 10:28:15.683642       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0930 10:28:15.948191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-472765"
	
	
	==> kube-proxy [ee0f198764c5398dafa7afe280ec257a9bdb09413618637a225a0880aec72202] <==
	I0930 10:25:15.769252       1 server_linux.go:66] "Using iptables proxy"
	I0930 10:25:15.873797       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0930 10:25:15.873947       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 10:25:15.907944       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0930 10:25:15.908000       1 server_linux.go:169] "Using iptables Proxier"
	I0930 10:25:15.910636       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 10:25:15.911376       1 server.go:483] "Version info" version="v1.31.1"
	I0930 10:25:15.911406       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:25:15.918811       1 config.go:199] "Starting service config controller"
	I0930 10:25:15.918853       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 10:25:15.918881       1 config.go:105] "Starting endpoint slice config controller"
	I0930 10:25:15.918886       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 10:25:15.921565       1 config.go:328] "Starting node config controller"
	I0930 10:25:15.921594       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 10:25:16.018954       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 10:25:16.019191       1 shared_informer.go:320] Caches are synced for service config
	I0930 10:25:16.023127       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bd9a328e85c36d4085de5431f23ad9fe7cb94625aab62dde606e23ba02579bc3] <==
	W0930 10:25:08.624341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 10:25:08.624371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 10:25:08.624451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 10:25:08.624500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:25:08.624595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 10:25:08.624613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:25:08.624693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 10:25:08.624778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:25:08.624866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 10:25:08.624884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 10:25:08.624959       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0930 10:25:08.624990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:25:08.625041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 10:25:08.625070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:25:09.442639       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 10:25:09.442920       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 10:25:09.517065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 10:25:09.517115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:25:09.534171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 10:25:09.534426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 10:25:09.554645       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 10:25:09.554872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:25:09.584546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 10:25:09.584802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 10:25:11.409839       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 10:27:33 addons-472765 kubelet[1480]: I0930 10:27:33.842749    1480 scope.go:117] "RemoveContainer" containerID="c97f8fec7e79d1e4823d471858e2aa81d9958df65f98f8c57bbf3df0a8ec5e46"
	Sep 30 10:27:35 addons-472765 kubelet[1480]: I0930 10:27:35.156320    1480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwfzv\" (UniqueName: \"kubernetes.io/projected/515c9c34-c69a-4d4b-9e98-3149ec4af887-kube-api-access-jwfzv\") pod \"515c9c34-c69a-4d4b-9e98-3149ec4af887\" (UID: \"515c9c34-c69a-4d4b-9e98-3149ec4af887\") "
	Sep 30 10:27:35 addons-472765 kubelet[1480]: I0930 10:27:35.156400    1480 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j497w\" (UniqueName: \"kubernetes.io/projected/00e253c1-f531-4a31-873b-8f60a7ce35ee-kube-api-access-j497w\") pod \"00e253c1-f531-4a31-873b-8f60a7ce35ee\" (UID: \"00e253c1-f531-4a31-873b-8f60a7ce35ee\") "
	Sep 30 10:27:35 addons-472765 kubelet[1480]: I0930 10:27:35.158836    1480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e253c1-f531-4a31-873b-8f60a7ce35ee-kube-api-access-j497w" (OuterVolumeSpecName: "kube-api-access-j497w") pod "00e253c1-f531-4a31-873b-8f60a7ce35ee" (UID: "00e253c1-f531-4a31-873b-8f60a7ce35ee"). InnerVolumeSpecName "kube-api-access-j497w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:27:35 addons-472765 kubelet[1480]: I0930 10:27:35.159057    1480 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515c9c34-c69a-4d4b-9e98-3149ec4af887-kube-api-access-jwfzv" (OuterVolumeSpecName: "kube-api-access-jwfzv") pod "515c9c34-c69a-4d4b-9e98-3149ec4af887" (UID: "515c9c34-c69a-4d4b-9e98-3149ec4af887"). InnerVolumeSpecName "kube-api-access-jwfzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:27:35 addons-472765 kubelet[1480]: I0930 10:27:35.257722    1480 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jwfzv\" (UniqueName: \"kubernetes.io/projected/515c9c34-c69a-4d4b-9e98-3149ec4af887-kube-api-access-jwfzv\") on node \"addons-472765\" DevicePath \"\""
	Sep 30 10:27:35 addons-472765 kubelet[1480]: I0930 10:27:35.257779    1480 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j497w\" (UniqueName: \"kubernetes.io/projected/00e253c1-f531-4a31-873b-8f60a7ce35ee-kube-api-access-j497w\") on node \"addons-472765\" DevicePath \"\""
	Sep 30 10:27:35 addons-472765 kubelet[1480]: I0930 10:27:35.855423    1480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49f13ee5ba47d63b6efe5513bdbfe4dceb2d3bcb4ea402eb2a094fc1f87f3f88"
	Sep 30 10:27:35 addons-472765 kubelet[1480]: I0930 10:27:35.860813    1480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ced06b74d7755041559e0fcb1caba6af904e83f4b61d008c436ec7b327ddf5e"
	Sep 30 10:27:56 addons-472765 kubelet[1480]: I0930 10:27:56.052994    1480 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xd6th" secret="" err="secret \"gcp-auth\" not found"
	Sep 30 10:28:01 addons-472765 kubelet[1480]: I0930 10:28:01.053663    1480 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dzrd4" secret="" err="secret \"gcp-auth\" not found"
	Sep 30 10:28:06 addons-472765 kubelet[1480]: I0930 10:28:06.067833    1480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-89d5ffd79-4b2l7" podStartSLOduration=73.115130906 podStartE2EDuration="1m16.067811187s" podCreationTimestamp="2024-09-30 10:26:50 +0000 UTC" firstStartedPulling="2024-09-30 10:27:55.324708767 +0000 UTC m=+164.429621647" lastFinishedPulling="2024-09-30 10:27:58.27738904 +0000 UTC m=+167.382301928" observedRunningTime="2024-09-30 10:27:58.969861213 +0000 UTC m=+168.074774101" watchObservedRunningTime="2024-09-30 10:28:06.067811187 +0000 UTC m=+175.172724075"
	Sep 30 10:28:07 addons-472765 kubelet[1480]: I0930 10:28:07.057356    1480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00e253c1-f531-4a31-873b-8f60a7ce35ee" path="/var/lib/kubelet/pods/00e253c1-f531-4a31-873b-8f60a7ce35ee/volumes"
	Sep 30 10:28:07 addons-472765 kubelet[1480]: I0930 10:28:07.057878    1480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="515c9c34-c69a-4d4b-9e98-3149ec4af887" path="/var/lib/kubelet/pods/515c9c34-c69a-4d4b-9e98-3149ec4af887/volumes"
	Sep 30 10:28:11 addons-472765 kubelet[1480]: I0930 10:28:11.072821    1480 scope.go:117] "RemoveContainer" containerID="d2585b5bca1a24e6c70fb1c3e57b6f67bd213598be0f816a7dd652079da03963"
	Sep 30 10:28:11 addons-472765 kubelet[1480]: I0930 10:28:11.081188    1480 scope.go:117] "RemoveContainer" containerID="12737b12755b01a51f4d9a30f394e3565f9296cd164f58228178b57360de03d8"
	Sep 30 10:28:17 addons-472765 kubelet[1480]: I0930 10:28:17.056147    1480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c859d826-f766-400b-9fb0-be54875dedd3" path="/var/lib/kubelet/pods/c859d826-f766-400b-9fb0-be54875dedd3/volumes"
	Sep 30 10:28:32 addons-472765 kubelet[1480]: I0930 10:28:32.053473    1480 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-tbdxk" secret="" err="secret \"gcp-auth\" not found"
	Sep 30 10:29:10 addons-472765 kubelet[1480]: I0930 10:29:10.053593    1480 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xd6th" secret="" err="secret \"gcp-auth\" not found"
	Sep 30 10:29:11 addons-472765 kubelet[1480]: I0930 10:29:11.140540    1480 scope.go:117] "RemoveContainer" containerID="b43f52f188d3e26ec3c8c443f9f5e9bdd617edbf8840cb54985b8a3208b743e6"
	Sep 30 10:29:21 addons-472765 kubelet[1480]: I0930 10:29:21.054321    1480 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dzrd4" secret="" err="secret \"gcp-auth\" not found"
	Sep 30 10:29:37 addons-472765 kubelet[1480]: I0930 10:29:37.052940    1480 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-tbdxk" secret="" err="secret \"gcp-auth\" not found"
	Sep 30 10:30:25 addons-472765 kubelet[1480]: I0930 10:30:25.053093    1480 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dzrd4" secret="" err="secret \"gcp-auth\" not found"
	Sep 30 10:30:25 addons-472765 kubelet[1480]: I0930 10:30:25.054092    1480 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xd6th" secret="" err="secret \"gcp-auth\" not found"
	Sep 30 10:30:42 addons-472765 kubelet[1480]: I0930 10:30:42.053179    1480 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-tbdxk" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [372ac1a324c409c0427c81c647f5e06aa5e540781e78993fd68000b92d0c5602] <==
	I0930 10:25:22.169623       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 10:25:22.223302       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 10:25:22.223348       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 10:25:22.233033       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 10:25:22.233778       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ddacbd7b-7543-48f2-979e-fd8157092d34", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-472765_21af6664-4abd-4e09-aa55-bd3aaacd0483 became leader
	I0930 10:25:22.233853       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-472765_21af6664-4abd-4e09-aa55-bd3aaacd0483!
	I0930 10:25:22.334700       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-472765_21af6664-4abd-4e09-aa55-bd3aaacd0483!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-472765 -n addons-472765
helpers_test.go:261: (dbg) Run:  kubectl --context addons-472765 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-gc9sx ingress-nginx-admission-patch-9v6mw test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-472765 describe pod ingress-nginx-admission-create-gc9sx ingress-nginx-admission-patch-9v6mw test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-472765 describe pod ingress-nginx-admission-create-gc9sx ingress-nginx-admission-patch-9v6mw test-job-nginx-0: exit status 1 (83.223305ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gc9sx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9v6mw" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-472765 describe pod ingress-nginx-admission-create-gc9sx ingress-nginx-admission-patch-9v6mw test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.81s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (376.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-852171 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0930 11:12:59.487373 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-852171 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m11.590611931s)

-- stdout --
	* [old-k8s-version-852171] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-852171" primary control-plane node in "old-k8s-version-852171" cluster
	* Pulling base image v0.0.45-1727108449-19696 ...
	* Restarting existing docker container for "old-k8s-version-852171" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-852171 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0930 11:12:44.932334 2748394 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:12:44.932503 2748394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:12:44.932509 2748394 out.go:358] Setting ErrFile to fd 2...
	I0930 11:12:44.932514 2748394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:12:44.932747 2748394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	I0930 11:12:44.933105 2748394 out.go:352] Setting JSON to false
	I0930 11:12:44.934071 2748394 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":154513,"bootTime":1727540252,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0930 11:12:44.934134 2748394 start.go:139] virtualization:  
	I0930 11:12:44.937346 2748394 out.go:177] * [old-k8s-version-852171] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 11:12:44.939835 2748394 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:12:44.939877 2748394 notify.go:220] Checking for updates...
	I0930 11:12:44.942361 2748394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:12:44.946273 2748394 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 11:12:44.948346 2748394 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	I0930 11:12:44.950414 2748394 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 11:12:44.952160 2748394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:12:44.954940 2748394 config.go:182] Loaded profile config "old-k8s-version-852171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0930 11:12:44.958256 2748394 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 11:12:44.960302 2748394 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:12:44.994849 2748394 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 11:12:44.994972 2748394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 11:12:45.150313 2748394 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:67 SystemTime:2024-09-30 11:12:45.136787246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 11:12:45.150460 2748394 docker.go:318] overlay module found
	I0930 11:12:45.153203 2748394 out.go:177] * Using the docker driver based on existing profile
	I0930 11:12:45.155735 2748394 start.go:297] selected driver: docker
	I0930 11:12:45.155772 2748394 start.go:901] validating driver "docker" against &{Name:old-k8s-version-852171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-852171 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:12:45.155905 2748394 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:12:45.156645 2748394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 11:12:45.279949 2748394 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:67 SystemTime:2024-09-30 11:12:45.251925606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 11:12:45.280416 2748394 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:12:45.280439 2748394 cni.go:84] Creating CNI manager for ""
	I0930 11:12:45.280486 2748394 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0930 11:12:45.280529 2748394 start.go:340] cluster config:
	{Name:old-k8s-version-852171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-852171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:12:45.293293 2748394 out.go:177] * Starting "old-k8s-version-852171" primary control-plane node in "old-k8s-version-852171" cluster
	I0930 11:12:45.295534 2748394 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0930 11:12:45.297906 2748394 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0930 11:12:45.300129 2748394 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0930 11:12:45.300220 2748394 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0930 11:12:45.300232 2748394 cache.go:56] Caching tarball of preloaded images
	I0930 11:12:45.300324 2748394 preload.go:172] Found /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 11:12:45.300335 2748394 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0930 11:12:45.300491 2748394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/config.json ...
	I0930 11:12:45.300743 2748394 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0930 11:12:45.342556 2748394 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon, skipping pull
	I0930 11:12:45.342580 2748394 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in daemon, skipping load
	I0930 11:12:45.342595 2748394 cache.go:194] Successfully downloaded all kic artifacts
	I0930 11:12:45.342634 2748394 start.go:360] acquireMachinesLock for old-k8s-version-852171: {Name:mk789ffe43eaa72b227f71257ce95ae6c31c0ce9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:12:45.342702 2748394 start.go:364] duration metric: took 37.259µs to acquireMachinesLock for "old-k8s-version-852171"
	I0930 11:12:45.342725 2748394 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:12:45.342730 2748394 fix.go:54] fixHost starting: 
	I0930 11:12:45.343008 2748394 cli_runner.go:164] Run: docker container inspect old-k8s-version-852171 --format={{.State.Status}}
	I0930 11:12:45.367592 2748394 fix.go:112] recreateIfNeeded on old-k8s-version-852171: state=Stopped err=<nil>
	W0930 11:12:45.367651 2748394 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:12:45.370889 2748394 out.go:177] * Restarting existing docker container for "old-k8s-version-852171" ...
	I0930 11:12:45.373678 2748394 cli_runner.go:164] Run: docker start old-k8s-version-852171
	I0930 11:12:45.820478 2748394 cli_runner.go:164] Run: docker container inspect old-k8s-version-852171 --format={{.State.Status}}
	I0930 11:12:45.849822 2748394 kic.go:430] container "old-k8s-version-852171" state is running.
	I0930 11:12:45.850193 2748394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-852171
	I0930 11:12:45.874809 2748394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/config.json ...
	I0930 11:12:45.875034 2748394 machine.go:93] provisionDockerMachine start ...
	I0930 11:12:45.875096 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:45.902955 2748394 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:45.903231 2748394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41598 <nil> <nil>}
	I0930 11:12:45.903247 2748394 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:12:45.903835 2748394 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0930 11:12:49.040019 2748394 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-852171
	
	I0930 11:12:49.040096 2748394 ubuntu.go:169] provisioning hostname "old-k8s-version-852171"
	I0930 11:12:49.040193 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:49.072793 2748394 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:49.073053 2748394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41598 <nil> <nil>}
	I0930 11:12:49.073065 2748394 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-852171 && echo "old-k8s-version-852171" | sudo tee /etc/hostname
	I0930 11:12:49.227351 2748394 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-852171
	
	I0930 11:12:49.227439 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:49.251910 2748394 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:49.252151 2748394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41598 <nil> <nil>}
	I0930 11:12:49.252175 2748394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-852171' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-852171/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-852171' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:12:49.388302 2748394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:12:49.388331 2748394 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19734-2538756/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-2538756/.minikube}
	I0930 11:12:49.388382 2748394 ubuntu.go:177] setting up certificates
	I0930 11:12:49.388394 2748394 provision.go:84] configureAuth start
	I0930 11:12:49.388468 2748394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-852171
	I0930 11:12:49.411708 2748394 provision.go:143] copyHostCerts
	I0930 11:12:49.411792 2748394 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.pem, removing ...
	I0930 11:12:49.411816 2748394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.pem
	I0930 11:12:49.411899 2748394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.pem (1078 bytes)
	I0930 11:12:49.412009 2748394 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-2538756/.minikube/cert.pem, removing ...
	I0930 11:12:49.412023 2748394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-2538756/.minikube/cert.pem
	I0930 11:12:49.412052 2748394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-2538756/.minikube/cert.pem (1123 bytes)
	I0930 11:12:49.412121 2748394 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-2538756/.minikube/key.pem, removing ...
	I0930 11:12:49.412131 2748394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-2538756/.minikube/key.pem
	I0930 11:12:49.412164 2748394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-2538756/.minikube/key.pem (1679 bytes)
	I0930 11:12:49.412220 2748394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-852171 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-852171]
	I0930 11:12:50.218596 2748394 provision.go:177] copyRemoteCerts
	I0930 11:12:50.218669 2748394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:12:50.218723 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:50.235262 2748394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41598 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/old-k8s-version-852171/id_rsa Username:docker}
	I0930 11:12:50.346655 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0930 11:12:50.373972 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0930 11:12:50.400993 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:12:50.427560 2748394 provision.go:87] duration metric: took 1.039143994s to configureAuth
	I0930 11:12:50.427583 2748394 ubuntu.go:193] setting minikube options for container-runtime
	I0930 11:12:50.427782 2748394 config.go:182] Loaded profile config "old-k8s-version-852171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0930 11:12:50.427790 2748394 machine.go:96] duration metric: took 4.552744576s to provisionDockerMachine
	I0930 11:12:50.427798 2748394 start.go:293] postStartSetup for "old-k8s-version-852171" (driver="docker")
	I0930 11:12:50.427809 2748394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:12:50.427859 2748394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:12:50.427900 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:50.452615 2748394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41598 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/old-k8s-version-852171/id_rsa Username:docker}
	I0930 11:12:50.545608 2748394 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:12:50.549125 2748394 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0930 11:12:50.549161 2748394 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0930 11:12:50.549172 2748394 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0930 11:12:50.549179 2748394 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0930 11:12:50.549189 2748394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-2538756/.minikube/addons for local assets ...
	I0930 11:12:50.549244 2748394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-2538756/.minikube/files for local assets ...
	I0930 11:12:50.549320 2748394 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-2538756/.minikube/files/etc/ssl/certs/25441572.pem -> 25441572.pem in /etc/ssl/certs
	I0930 11:12:50.549431 2748394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:12:50.558633 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/files/etc/ssl/certs/25441572.pem --> /etc/ssl/certs/25441572.pem (1708 bytes)
	I0930 11:12:50.586768 2748394 start.go:296] duration metric: took 158.955739ms for postStartSetup
	I0930 11:12:50.586927 2748394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 11:12:50.586991 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:50.606726 2748394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41598 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/old-k8s-version-852171/id_rsa Username:docker}
	I0930 11:12:50.697303 2748394 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0930 11:12:50.704776 2748394 fix.go:56] duration metric: took 5.362038077s for fixHost
	I0930 11:12:50.704798 2748394 start.go:83] releasing machines lock for "old-k8s-version-852171", held for 5.362086331s
	I0930 11:12:50.704872 2748394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-852171
	I0930 11:12:50.726252 2748394 ssh_runner.go:195] Run: cat /version.json
	I0930 11:12:50.726305 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:50.726532 2748394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:12:50.726591 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:50.756835 2748394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41598 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/old-k8s-version-852171/id_rsa Username:docker}
	I0930 11:12:50.769299 2748394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41598 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/old-k8s-version-852171/id_rsa Username:docker}
	I0930 11:12:50.863305 2748394 ssh_runner.go:195] Run: systemctl --version
	I0930 11:12:50.994208 2748394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 11:12:51.000199 2748394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0930 11:12:51.027763 2748394 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0930 11:12:51.027875 2748394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:12:51.038359 2748394 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 11:12:51.038400 2748394 start.go:495] detecting cgroup driver to use...
	I0930 11:12:51.038434 2748394 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 11:12:51.038534 2748394 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0930 11:12:51.057576 2748394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 11:12:51.076355 2748394 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:12:51.076435 2748394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:12:51.098077 2748394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:12:51.113641 2748394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:12:51.243263 2748394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:12:51.361381 2748394 docker.go:233] disabling docker service ...
	I0930 11:12:51.361501 2748394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:12:51.377735 2748394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:12:51.391809 2748394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:12:51.498013 2748394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:12:51.605071 2748394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:12:51.619146 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:12:51.639417 2748394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0930 11:12:51.650714 2748394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0930 11:12:51.664284 2748394 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0930 11:12:51.664383 2748394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0930 11:12:51.674160 2748394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 11:12:51.683924 2748394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0930 11:12:51.693582 2748394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 11:12:51.703565 2748394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:12:51.712781 2748394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0930 11:12:51.722760 2748394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:12:51.733040 2748394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:12:51.742038 2748394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:51.893428 2748394 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0930 11:12:52.137011 2748394 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0930 11:12:52.137139 2748394 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0930 11:12:52.141246 2748394 start.go:563] Will wait 60s for crictl version
	I0930 11:12:52.141356 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:12:52.145192 2748394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:12:52.197398 2748394 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0930 11:12:52.197485 2748394 ssh_runner.go:195] Run: containerd --version
	I0930 11:12:52.225453 2748394 ssh_runner.go:195] Run: containerd --version
	I0930 11:12:52.250114 2748394 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0930 11:12:52.251562 2748394 cli_runner.go:164] Run: docker network inspect old-k8s-version-852171 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 11:12:52.268737 2748394 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0930 11:12:52.272739 2748394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:12:52.283421 2748394 kubeadm.go:883] updating cluster {Name:old-k8s-version-852171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-852171 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:12:52.283545 2748394 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0930 11:12:52.283625 2748394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:12:52.330422 2748394 containerd.go:627] all images are preloaded for containerd runtime.
	I0930 11:12:52.330445 2748394 containerd.go:534] Images already preloaded, skipping extraction
	I0930 11:12:52.330504 2748394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:12:52.384849 2748394 containerd.go:627] all images are preloaded for containerd runtime.
	I0930 11:12:52.384925 2748394 cache_images.go:84] Images are preloaded, skipping loading
	I0930 11:12:52.384950 2748394 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0930 11:12:52.385097 2748394 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-852171 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-852171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:12:52.385190 2748394 ssh_runner.go:195] Run: sudo crictl info
	I0930 11:12:52.435551 2748394 cni.go:84] Creating CNI manager for ""
	I0930 11:12:52.435579 2748394 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0930 11:12:52.435590 2748394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:12:52.435636 2748394 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-852171 NodeName:old-k8s-version-852171 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0930 11:12:52.435764 2748394 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-852171"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:12:52.435833 2748394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0930 11:12:52.445393 2748394 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:12:52.445512 2748394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 11:12:52.454821 2748394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0930 11:12:52.474522 2748394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:12:52.494452 2748394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0930 11:12:52.518841 2748394 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0930 11:12:52.524384 2748394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:12:52.551625 2748394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:52.670448 2748394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:12:52.687976 2748394 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171 for IP: 192.168.76.2
	I0930 11:12:52.688045 2748394 certs.go:194] generating shared ca certs ...
	I0930 11:12:52.688077 2748394 certs.go:226] acquiring lock for ca certs: {Name:mkff6faeb681279e5ac456a1e9fb9c9dcac2d430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:52.688258 2748394 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.key
	I0930 11:12:52.688337 2748394 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.key
	I0930 11:12:52.688374 2748394 certs.go:256] generating profile certs ...
	I0930 11:12:52.688502 2748394 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.key
	I0930 11:12:52.688610 2748394 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/apiserver.key.b013a062
	I0930 11:12:52.688680 2748394 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/proxy-client.key
	I0930 11:12:52.688833 2748394 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/2544157.pem (1338 bytes)
	W0930 11:12:52.688890 2748394 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/2544157_empty.pem, impossibly tiny 0 bytes
	I0930 11:12:52.688915 2748394 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 11:12:52.688972 2748394 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem (1078 bytes)
	I0930 11:12:52.689054 2748394 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:12:52.689107 2748394 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/key.pem (1679 bytes)
	I0930 11:12:52.689178 2748394 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/files/etc/ssl/certs/25441572.pem (1708 bytes)
	I0930 11:12:52.689848 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:12:52.721119 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 11:12:52.758816 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:12:52.864949 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:12:52.924031 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0930 11:12:52.957810 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 11:12:52.984712 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:12:53.013897 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:12:53.045607 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/2544157.pem --> /usr/share/ca-certificates/2544157.pem (1338 bytes)
	I0930 11:12:53.071508 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/files/etc/ssl/certs/25441572.pem --> /usr/share/ca-certificates/25441572.pem (1708 bytes)
	I0930 11:12:53.096713 2748394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:12:53.122080 2748394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:12:53.142570 2748394 ssh_runner.go:195] Run: openssl version
	I0930 11:12:53.148279 2748394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2544157.pem && ln -fs /usr/share/ca-certificates/2544157.pem /etc/ssl/certs/2544157.pem"
	I0930 11:12:53.158311 2748394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2544157.pem
	I0930 11:12:53.161995 2748394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 10:35 /usr/share/ca-certificates/2544157.pem
	I0930 11:12:53.162115 2748394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2544157.pem
	I0930 11:12:53.169263 2748394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2544157.pem /etc/ssl/certs/51391683.0"
	I0930 11:12:53.178430 2748394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25441572.pem && ln -fs /usr/share/ca-certificates/25441572.pem /etc/ssl/certs/25441572.pem"
	I0930 11:12:53.187950 2748394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25441572.pem
	I0930 11:12:53.191575 2748394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 10:35 /usr/share/ca-certificates/25441572.pem
	I0930 11:12:53.191799 2748394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25441572.pem
	I0930 11:12:53.198995 2748394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25441572.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:12:53.208645 2748394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:12:53.218481 2748394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:53.222239 2748394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:53.222353 2748394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:53.229325 2748394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:12:53.238810 2748394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:12:53.242533 2748394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:12:53.249647 2748394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:12:53.256676 2748394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:12:53.263680 2748394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:12:53.278932 2748394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:12:53.288348 2748394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
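Note: the openssl runs above use -checkend 86400, which exits 0 only if the certificate is still valid 24 hours from now. A minimal sketch of the same check done by hand, assuming shell access to the node; the path is one of the files already listed in this log:

	# Exit 0 means the cert does not expire within the next 86400 seconds (24h).
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for >=24h"
	# Print the actual expiry date instead of a yes/no answer.
	sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
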
	I0930 11:12:53.296682 2748394 kubeadm.go:392] StartCluster: {Name:old-k8s-version-852171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-852171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:12:53.296833 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0930 11:12:53.296927 2748394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:12:53.382098 2748394 cri.go:89] found id: "cd839508497e8be03f9a8159be49515fa9676c830305d9598c401cc752b5586e"
	I0930 11:12:53.382172 2748394 cri.go:89] found id: "37144e1d82fd46928b194266869ab94822cb7c307075051434a8baf0910be3a8"
	I0930 11:12:53.382194 2748394 cri.go:89] found id: "968b565e4c481781a61fddf1de32bef88b722dbce4ea07952ab05c3178d7147c"
	I0930 11:12:53.382219 2748394 cri.go:89] found id: "aac2d21475c261a888c6689fc91be4ffc292d1e0eab040ad68df7c73ae710f6f"
	I0930 11:12:53.382250 2748394 cri.go:89] found id: "db2335093572eb72f5de83a00411c45652f8ec375bb1ebdcfa6fae0d706b1e2a"
	I0930 11:12:53.382275 2748394 cri.go:89] found id: "2e0c3eafc3ba0696284889beac444aad70eb46423288ac9bd41aa4dd0ed4a245"
	I0930 11:12:53.382294 2748394 cri.go:89] found id: "4c39e9082952590d679ee58214b6a4fa416b2a839c581ae827b77ed10269e492"
	I0930 11:12:53.382319 2748394 cri.go:89] found id: "ac879314a70238e9a3d188a20b16c633d0913c497744edf9f8fb4e81d4d8cffc"
	I0930 11:12:53.382352 2748394 cri.go:89] found id: ""
	I0930 11:12:53.382432 2748394 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0930 11:12:53.403911 2748394 cri.go:116] JSON = null
	W0930 11:12:53.404003 2748394 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
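Note: the warning above comes from comparing two views of the same containers: crictl (via the CRI) reports 8 kube-system containers, while runc in the k8s.io root lists none as paused, so there is nothing to unpause. A minimal sketch for reproducing the comparison by hand, using the exact commands already shown in this log:

	# CRI view: container IDs for kube-system pods.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Low-level runc view of the same containerd-managed root; JSON is null when nothing is listed.
	sudo runc --root /run/containerd/runc/k8s.io list -f json
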
	I0930 11:12:53.404107 2748394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 11:12:53.414063 2748394 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 11:12:53.414132 2748394 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 11:12:53.414216 2748394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 11:12:53.423968 2748394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 11:12:53.424440 2748394 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-852171" does not appear in /home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 11:12:53.424613 2748394 kubeconfig.go:62] /home/jenkins/minikube-integration/19734-2538756/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-852171" cluster setting kubeconfig missing "old-k8s-version-852171" context setting]
	I0930 11:12:53.424974 2748394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/kubeconfig: {Name:mk7f607d1d45d210ea4523c0a214397b48972e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:53.426315 2748394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 11:12:53.438181 2748394 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0930 11:12:53.438256 2748394 kubeadm.go:597] duration metric: took 24.104194ms to restartPrimaryControlPlane
	I0930 11:12:53.438281 2748394 kubeadm.go:394] duration metric: took 141.609804ms to StartCluster
	I0930 11:12:53.438329 2748394 settings.go:142] acquiring lock: {Name:mkc704d8ddfae8fa577b296109d2f74f59988133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:53.438431 2748394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 11:12:53.439062 2748394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/kubeconfig: {Name:mk7f607d1d45d210ea4523c0a214397b48972e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:53.439318 2748394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0930 11:12:53.439698 2748394 config.go:182] Loaded profile config "old-k8s-version-852171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0930 11:12:53.439875 2748394 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 11:12:53.439975 2748394 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-852171"
	I0930 11:12:53.440005 2748394 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-852171"
	W0930 11:12:53.440073 2748394 addons.go:243] addon storage-provisioner should already be in state true
	I0930 11:12:53.440044 2748394 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-852171"
	I0930 11:12:53.440141 2748394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-852171"
	I0930 11:12:53.440448 2748394 cli_runner.go:164] Run: docker container inspect old-k8s-version-852171 --format={{.State.Status}}
	I0930 11:12:53.440706 2748394 host.go:66] Checking if "old-k8s-version-852171" exists ...
	I0930 11:12:53.441176 2748394 cli_runner.go:164] Run: docker container inspect old-k8s-version-852171 --format={{.State.Status}}
	I0930 11:12:53.440049 2748394 addons.go:69] Setting dashboard=true in profile "old-k8s-version-852171"
	I0930 11:12:53.441565 2748394 addons.go:234] Setting addon dashboard=true in "old-k8s-version-852171"
	W0930 11:12:53.441574 2748394 addons.go:243] addon dashboard should already be in state true
	I0930 11:12:53.441597 2748394 host.go:66] Checking if "old-k8s-version-852171" exists ...
	I0930 11:12:53.442024 2748394 cli_runner.go:164] Run: docker container inspect old-k8s-version-852171 --format={{.State.Status}}
	I0930 11:12:53.440055 2748394 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-852171"
	I0930 11:12:53.449430 2748394 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-852171"
	W0930 11:12:53.449459 2748394 addons.go:243] addon metrics-server should already be in state true
	I0930 11:12:53.449525 2748394 host.go:66] Checking if "old-k8s-version-852171" exists ...
	I0930 11:12:53.450040 2748394 cli_runner.go:164] Run: docker container inspect old-k8s-version-852171 --format={{.State.Status}}
	I0930 11:12:53.452962 2748394 out.go:177] * Verifying Kubernetes components...
	I0930 11:12:53.467728 2748394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:53.503754 2748394 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-852171"
	W0930 11:12:53.503782 2748394 addons.go:243] addon default-storageclass should already be in state true
	I0930 11:12:53.503807 2748394 host.go:66] Checking if "old-k8s-version-852171" exists ...
	I0930 11:12:53.504257 2748394 cli_runner.go:164] Run: docker container inspect old-k8s-version-852171 --format={{.State.Status}}
	I0930 11:12:53.511787 2748394 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 11:12:53.514919 2748394 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:12:53.514942 2748394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 11:12:53.515004 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:53.515139 2748394 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0930 11:12:53.516808 2748394 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0930 11:12:53.521057 2748394 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0930 11:12:53.521085 2748394 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0930 11:12:53.521155 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:53.539654 2748394 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 11:12:53.543784 2748394 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 11:12:53.543814 2748394 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 11:12:53.543899 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:53.593076 2748394 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 11:12:53.593102 2748394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 11:12:53.593176 2748394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-852171
	I0930 11:12:53.593390 2748394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41598 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/old-k8s-version-852171/id_rsa Username:docker}
	I0930 11:12:53.599720 2748394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41598 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/old-k8s-version-852171/id_rsa Username:docker}
	I0930 11:12:53.619725 2748394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41598 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/old-k8s-version-852171/id_rsa Username:docker}
	I0930 11:12:53.645047 2748394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41598 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/old-k8s-version-852171/id_rsa Username:docker}
	I0930 11:12:53.717207 2748394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:12:53.741953 2748394 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-852171" to be "Ready" ...
	I0930 11:12:53.812069 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:12:53.814224 2748394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 11:12:53.814258 2748394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 11:12:53.846638 2748394 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0930 11:12:53.846668 2748394 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0930 11:12:53.879189 2748394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 11:12:53.879229 2748394 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 11:12:53.919032 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 11:12:53.946559 2748394 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0930 11:12:53.946596 2748394 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0930 11:12:54.021333 2748394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 11:12:54.021421 2748394 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 11:12:54.101202 2748394 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0930 11:12:54.101241 2748394 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0930 11:12:54.124035 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0930 11:12:54.136302 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.136356 2748394 retry.go:31] will retry after 285.036458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0930 11:12:54.165670 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.165725 2748394 retry.go:31] will retry after 224.358797ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.183634 2748394 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0930 11:12:54.183692 2748394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0930 11:12:54.270998 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.271032 2748394 retry.go:31] will retry after 291.733367ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.273119 2748394 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0930 11:12:54.273138 2748394 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0930 11:12:54.292010 2748394 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0930 11:12:54.292052 2748394 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0930 11:12:54.311216 2748394 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0930 11:12:54.311252 2748394 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0930 11:12:54.331569 2748394 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0930 11:12:54.331613 2748394 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0930 11:12:54.350886 2748394 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0930 11:12:54.350910 2748394 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0930 11:12:54.370254 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0930 11:12:54.391258 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0930 11:12:54.422516 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0930 11:12:54.555845 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.555886 2748394 retry.go:31] will retry after 194.057098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.563044 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0930 11:12:54.680386 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.680475 2748394 retry.go:31] will retry after 484.676729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0930 11:12:54.699115 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.699212 2748394 retry.go:31] will retry after 260.234064ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.751075 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0930 11:12:54.768995 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.769072 2748394 retry.go:31] will retry after 194.760078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0930 11:12:54.863333 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.863423 2748394 retry.go:31] will retry after 252.692275ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:54.960072 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:12:54.964444 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0930 11:12:55.074229 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.074314 2748394 retry.go:31] will retry after 701.235141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.117116 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0930 11:12:55.120111 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.120198 2748394 retry.go:31] will retry after 596.573977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.165436 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0930 11:12:55.251874 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.251976 2748394 retry.go:31] will retry after 714.954705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0930 11:12:55.323271 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.323352 2748394 retry.go:31] will retry after 301.12773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.624808 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0930 11:12:55.717244 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0930 11:12:55.739246 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.739344 2748394 retry.go:31] will retry after 1.20500384s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.742784 2748394 node_ready.go:53] error getting node "old-k8s-version-852171": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-852171": dial tcp 192.168.76.2:8443: connect: connection refused
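Note: every "connection refused" retry in this stretch points at the same condition: the apiserver on old-k8s-version-852171 is not yet accepting connections, so both the addon applies (localhost:8443) and the node readiness poll (192.168.76.2:8443) fail until it comes up. A minimal sketch for probing the endpoint directly while the retries run, assuming shell access to the node; the kubeconfig and kubectl paths are the ones already used in the log:

	# Ask the apiserver for its health endpoint using the on-node kubeconfig.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl get --raw /healthz
	# Or check whether anything is listening on port 8443 at all.
	sudo ss -ltnp | grep 8443
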
	I0930 11:12:55.775989 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0930 11:12:55.864136 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.864218 2748394 retry.go:31] will retry after 992.449611ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0930 11:12:55.905669 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.905752 2748394 retry.go:31] will retry after 572.567261ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:55.968024 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0930 11:12:56.099439 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:56.099556 2748394 retry.go:31] will retry after 810.324976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:56.479551 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0930 11:12:56.575472 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:56.575520 2748394 retry.go:31] will retry after 1.659808556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:56.857420 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 11:12:56.910811 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0930 11:12:56.945193 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0930 11:12:56.971762 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:56.971795 2748394 retry.go:31] will retry after 1.415426557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0930 11:12:57.074791 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:57.074827 2748394 retry.go:31] will retry after 1.81565157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0930 11:12:57.143073 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:57.143108 2748394 retry.go:31] will retry after 974.239298ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:58.118210 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0930 11:12:58.224140 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:58.224222 2748394 retry.go:31] will retry after 1.457195125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:58.236514 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:12:58.243248 2748394 node_ready.go:53] error getting node "old-k8s-version-852171": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-852171": dial tcp 192.168.76.2:8443: connect: connection refused
	W0930 11:12:58.343252 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:58.343290 2748394 retry.go:31] will retry after 1.318109864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:58.387553 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0930 11:12:58.479927 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:58.479965 2748394 retry.go:31] will retry after 2.344586912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:58.891057 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0930 11:12:58.983253 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:58.983288 2748394 retry.go:31] will retry after 1.974285521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:59.662455 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:12:59.681800 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0930 11:12:59.848132 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:59.848178 2748394 retry.go:31] will retry after 4.169781449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0930 11:12:59.892016 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:12:59.892049 2748394 retry.go:31] will retry after 2.761405584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:13:00.243330 2748394 node_ready.go:53] error getting node "old-k8s-version-852171": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-852171": dial tcp 192.168.76.2:8443: connect: connection refused
	I0930 11:13:00.825552 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0930 11:13:00.956760 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:13:00.956811 2748394 retry.go:31] will retry after 3.349610631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:13:00.957993 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0930 11:13:01.059177 2748394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:13:01.059224 2748394 retry.go:31] will retry after 3.699634523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0930 11:13:02.654275 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0930 11:13:02.743249 2748394 node_ready.go:53] error getting node "old-k8s-version-852171": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-852171": dial tcp 192.168.76.2:8443: connect: connection refused
	I0930 11:13:04.018737 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:13:04.307392 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 11:13:04.759318 2748394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0930 11:13:13.593499 2748394 node_ready.go:49] node "old-k8s-version-852171" has status "Ready":"True"
	I0930 11:13:13.593577 2748394 node_ready.go:38] duration metric: took 19.851536774s for node "old-k8s-version-852171" to be "Ready" ...
	I0930 11:13:13.593606 2748394 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:13:13.822844 2748394 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-h5pdm" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.192396 2748394 pod_ready.go:93] pod "coredns-74ff55c5b-h5pdm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.192479 2748394 pod_ready.go:82] duration metric: took 369.556839ms for pod "coredns-74ff55c5b-h5pdm" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.192508 2748394 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-852171" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.335848 2748394 pod_ready.go:93] pod "etcd-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.335922 2748394 pod_ready.go:82] duration metric: took 143.392251ms for pod "etcd-old-k8s-version-852171" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.335954 2748394 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-852171" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.397501 2748394 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.397578 2748394 pod_ready.go:82] duration metric: took 61.600891ms for pod "kube-apiserver-old-k8s-version-852171" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.397606 2748394 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.175721 2748394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (12.521401408s)
	I0930 11:13:16.360952 2748394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.34217236s)
	I0930 11:13:16.426401 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:16.522573 2748394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.21511051s)
	I0930 11:13:16.522609 2748394 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-852171"
	I0930 11:13:16.862237 2748394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.102863361s)
	I0930 11:13:16.864421 2748394 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-852171 addons enable metrics-server
	
	I0930 11:13:16.866093 2748394 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0930 11:13:16.867767 2748394 addons.go:510] duration metric: took 23.427908106s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0930 11:13:18.903269 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:20.904942 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:23.403757 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:25.405163 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:27.903978 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:29.904320 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:32.404080 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:34.421956 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:36.903722 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:38.905487 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:41.405347 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:43.905253 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:46.403914 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:48.404394 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:50.436861 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:52.904737 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:55.404173 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:57.404600 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:13:59.903440 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:01.903924 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:04.403430 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:06.403964 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:08.405089 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:10.905424 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:12.905460 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:15.426479 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:17.904956 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:20.406141 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:22.903534 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:23.405703 2748394 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:23.405737 2748394 pod_ready.go:82] duration metric: took 1m9.008109425s for pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:23.405750 2748394 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxvn5" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:23.411347 2748394 pod_ready.go:93] pod "kube-proxy-kxvn5" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:23.411374 2748394 pod_ready.go:82] duration metric: took 5.616076ms for pod "kube-proxy-kxvn5" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:23.411392 2748394 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:25.420522 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:27.917173 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:29.918214 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:31.921230 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:34.417899 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:36.917779 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:38.918074 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:41.417776 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:43.917076 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:44.417984 2748394 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:44.418012 2748394 pod_ready.go:82] duration metric: took 21.006611311s for pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:44.418024 2748394 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:46.423144 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:48.424072 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:50.424451 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:52.424626 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:54.924126 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:56.924181 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:59.424088 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:01.425237 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:03.925218 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:06.424233 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:08.425749 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:10.923868 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:12.924116 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:15.425208 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:17.924777 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:20.425696 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:22.923761 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:24.924125 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:27.425113 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:29.924057 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:31.924187 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:34.426183 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:36.924440 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:39.424402 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:41.924854 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:44.425247 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:46.924157 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:48.924885 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:51.424900 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:53.926245 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:56.424530 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:58.924578 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:01.424551 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:03.424601 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:05.425106 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:07.425445 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:09.924554 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:11.934902 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:14.424332 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:16.424431 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:18.424980 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:20.924837 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:23.424418 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:25.424858 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:27.923876 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:29.924751 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:32.423819 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:34.424200 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:36.424886 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:38.425202 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:40.923754 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:43.423383 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:45.427788 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:47.923761 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:50.424019 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:52.424796 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:55.019233 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:57.424822 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:59.923898 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:01.925374 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:03.926217 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:06.424675 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:08.425276 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:10.928976 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:13.424736 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:15.424947 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:17.924977 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:20.424782 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:22.932296 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:25.424174 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:27.424416 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:29.425018 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:31.924081 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:33.924510 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:35.927130 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:38.424967 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:40.425307 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:42.924148 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:45.521204 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:47.923758 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:49.924104 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:51.925007 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:54.427003 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:56.923590 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:58.924034 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:01.424345 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:03.424683 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:05.424880 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:07.924618 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:10.423907 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:12.924252 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:14.924372 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:17.425678 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:19.924542 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:21.924726 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:23.928413 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:26.423641 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:28.424571 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:30.425349 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:32.925641 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:35.425447 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:37.426126 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:39.924012 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:42.425404 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:44.424651 2748394 pod_ready.go:82] duration metric: took 4m0.006611245s for pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace to be "Ready" ...
	E0930 11:18:44.424680 2748394 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 11:18:44.424690 2748394 pod_ready.go:39] duration metric: took 5m30.831059534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:18:44.424706 2748394 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:18:44.424735 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0930 11:18:44.424798 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 11:18:44.462050 2748394 cri.go:89] found id: "27928206b912b5caa53bfc5467de1638284a81a43e7262d3661bd5d8430a9d7f"
	I0930 11:18:44.462072 2748394 cri.go:89] found id: "2e0c3eafc3ba0696284889beac444aad70eb46423288ac9bd41aa4dd0ed4a245"
	I0930 11:18:44.462077 2748394 cri.go:89] found id: ""
	I0930 11:18:44.462085 2748394 logs.go:276] 2 containers: [27928206b912b5caa53bfc5467de1638284a81a43e7262d3661bd5d8430a9d7f 2e0c3eafc3ba0696284889beac444aad70eb46423288ac9bd41aa4dd0ed4a245]
	I0930 11:18:44.462139 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.465719 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.469526 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0930 11:18:44.469597 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 11:18:44.509789 2748394 cri.go:89] found id: "bf09b410e75f8b5b7a1af956c961ed5d411b85e54a06fdec3f471c80d8088e5b"
	I0930 11:18:44.509814 2748394 cri.go:89] found id: "ac879314a70238e9a3d188a20b16c633d0913c497744edf9f8fb4e81d4d8cffc"
	I0930 11:18:44.509820 2748394 cri.go:89] found id: ""
	I0930 11:18:44.509827 2748394 logs.go:276] 2 containers: [bf09b410e75f8b5b7a1af956c961ed5d411b85e54a06fdec3f471c80d8088e5b ac879314a70238e9a3d188a20b16c633d0913c497744edf9f8fb4e81d4d8cffc]
	I0930 11:18:44.509885 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.513303 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.516645 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0930 11:18:44.516757 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 11:18:44.555725 2748394 cri.go:89] found id: "71d296562fe24f66e1a29229021fba4fcf7228bfccea2b74202e1f3b5a5c5061"
	I0930 11:18:44.555749 2748394 cri.go:89] found id: "cd839508497e8be03f9a8159be49515fa9676c830305d9598c401cc752b5586e"
	I0930 11:18:44.555755 2748394 cri.go:89] found id: ""
	I0930 11:18:44.555765 2748394 logs.go:276] 2 containers: [71d296562fe24f66e1a29229021fba4fcf7228bfccea2b74202e1f3b5a5c5061 cd839508497e8be03f9a8159be49515fa9676c830305d9598c401cc752b5586e]
	I0930 11:18:44.555823 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.559329 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.562687 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0930 11:18:44.562761 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 11:18:44.602629 2748394 cri.go:89] found id: "5dad8b0ba377364fd2242c1fd0cce8f56d5513c2b51a7703f0468c415d9b95d9"
	I0930 11:18:44.602702 2748394 cri.go:89] found id: "4c39e9082952590d679ee58214b6a4fa416b2a839c581ae827b77ed10269e492"
	I0930 11:18:44.602723 2748394 cri.go:89] found id: ""
	I0930 11:18:44.602744 2748394 logs.go:276] 2 containers: [5dad8b0ba377364fd2242c1fd0cce8f56d5513c2b51a7703f0468c415d9b95d9 4c39e9082952590d679ee58214b6a4fa416b2a839c581ae827b77ed10269e492]
	I0930 11:18:44.602830 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.606549 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.609624 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0930 11:18:44.609703 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 11:18:44.646286 2748394 cri.go:89] found id: "c51e8741669e52df8a6fe07888e8c0e98e5233e0d659eef6e07e454291c68107"
	I0930 11:18:44.646318 2748394 cri.go:89] found id: "aac2d21475c261a888c6689fc91be4ffc292d1e0eab040ad68df7c73ae710f6f"
	I0930 11:18:44.646324 2748394 cri.go:89] found id: ""
	I0930 11:18:44.646331 2748394 logs.go:276] 2 containers: [c51e8741669e52df8a6fe07888e8c0e98e5233e0d659eef6e07e454291c68107 aac2d21475c261a888c6689fc91be4ffc292d1e0eab040ad68df7c73ae710f6f]
	I0930 11:18:44.646386 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.649926 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.659151 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 11:18:44.659225 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 11:18:44.707019 2748394 cri.go:89] found id: "7304773a3405a7475e221c939f968656688e4c992063f83831ace3ceda803c7c"
	I0930 11:18:44.707045 2748394 cri.go:89] found id: "db2335093572eb72f5de83a00411c45652f8ec375bb1ebdcfa6fae0d706b1e2a"
	I0930 11:18:44.707051 2748394 cri.go:89] found id: ""
	I0930 11:18:44.707058 2748394 logs.go:276] 2 containers: [7304773a3405a7475e221c939f968656688e4c992063f83831ace3ceda803c7c db2335093572eb72f5de83a00411c45652f8ec375bb1ebdcfa6fae0d706b1e2a]
	I0930 11:18:44.707118 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.710705 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.714275 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0930 11:18:44.714374 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 11:18:44.753025 2748394 cri.go:89] found id: "f5215cf9da519d5ed406523e41bc68a570586977204e40f983b753e6b12f62f1"
	I0930 11:18:44.753095 2748394 cri.go:89] found id: "37144e1d82fd46928b194266869ab94822cb7c307075051434a8baf0910be3a8"
	I0930 11:18:44.753115 2748394 cri.go:89] found id: ""
	I0930 11:18:44.753141 2748394 logs.go:276] 2 containers: [f5215cf9da519d5ed406523e41bc68a570586977204e40f983b753e6b12f62f1 37144e1d82fd46928b194266869ab94822cb7c307075051434a8baf0910be3a8]
	I0930 11:18:44.753219 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.756937 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.760578 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 11:18:44.760705 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 11:18:44.800958 2748394 cri.go:89] found id: "348449ef03663da76e752c4aa688bc8b80580838b44c841f72709b8cae477153"
	I0930 11:18:44.800997 2748394 cri.go:89] found id: ""
	I0930 11:18:44.801005 2748394 logs.go:276] 1 containers: [348449ef03663da76e752c4aa688bc8b80580838b44c841f72709b8cae477153]
	I0930 11:18:44.801064 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.804510 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0930 11:18:44.804585 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 11:18:44.850128 2748394 cri.go:89] found id: "8d88cb7d95363af60796acda1cf39e0daf68f34c71e9906563eed8aa171bda75"
	I0930 11:18:44.850153 2748394 cri.go:89] found id: "bd1018a19355d11d0c01dc9dde1d023a8af02b1ad94db1fb3d13c565b433d42e"
	I0930 11:18:44.850158 2748394 cri.go:89] found id: ""
	I0930 11:18:44.850165 2748394 logs.go:276] 2 containers: [8d88cb7d95363af60796acda1cf39e0daf68f34c71e9906563eed8aa171bda75 bd1018a19355d11d0c01dc9dde1d023a8af02b1ad94db1fb3d13c565b433d42e]
	I0930 11:18:44.850235 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.853883 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.857404 2748394 logs.go:123] Gathering logs for kube-controller-manager [7304773a3405a7475e221c939f968656688e4c992063f83831ace3ceda803c7c] ...
	I0930 11:18:44.857429 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7304773a3405a7475e221c939f968656688e4c992063f83831ace3ceda803c7c"
	I0930 11:18:44.915683 2748394 logs.go:123] Gathering logs for kube-controller-manager [db2335093572eb72f5de83a00411c45652f8ec375bb1ebdcfa6fae0d706b1e2a] ...
	I0930 11:18:44.915718 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db2335093572eb72f5de83a00411c45652f8ec375bb1ebdcfa6fae0d706b1e2a"
	I0930 11:18:44.970537 2748394 logs.go:123] Gathering logs for kubernetes-dashboard [348449ef03663da76e752c4aa688bc8b80580838b44c841f72709b8cae477153] ...
	I0930 11:18:44.970574 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 348449ef03663da76e752c4aa688bc8b80580838b44c841f72709b8cae477153"
	I0930 11:18:45.075362 2748394 logs.go:123] Gathering logs for coredns [cd839508497e8be03f9a8159be49515fa9676c830305d9598c401cc752b5586e] ...
	I0930 11:18:45.075401 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd839508497e8be03f9a8159be49515fa9676c830305d9598c401cc752b5586e"
	I0930 11:18:45.174486 2748394 logs.go:123] Gathering logs for kube-scheduler [5dad8b0ba377364fd2242c1fd0cce8f56d5513c2b51a7703f0468c415d9b95d9] ...
	I0930 11:18:45.174524 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dad8b0ba377364fd2242c1fd0cce8f56d5513c2b51a7703f0468c415d9b95d9"
	I0930 11:18:45.317114 2748394 logs.go:123] Gathering logs for kube-proxy [c51e8741669e52df8a6fe07888e8c0e98e5233e0d659eef6e07e454291c68107] ...
	I0930 11:18:45.317143 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c51e8741669e52df8a6fe07888e8c0e98e5233e0d659eef6e07e454291c68107"
	I0930 11:18:45.409941 2748394 logs.go:123] Gathering logs for kube-proxy [aac2d21475c261a888c6689fc91be4ffc292d1e0eab040ad68df7c73ae710f6f] ...
	I0930 11:18:45.409967 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aac2d21475c261a888c6689fc91be4ffc292d1e0eab040ad68df7c73ae710f6f"
	I0930 11:18:45.482169 2748394 logs.go:123] Gathering logs for kindnet [f5215cf9da519d5ed406523e41bc68a570586977204e40f983b753e6b12f62f1] ...
	I0930 11:18:45.482209 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5215cf9da519d5ed406523e41bc68a570586977204e40f983b753e6b12f62f1"
	I0930 11:18:45.530754 2748394 logs.go:123] Gathering logs for kindnet [37144e1d82fd46928b194266869ab94822cb7c307075051434a8baf0910be3a8] ...
	I0930 11:18:45.530786 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37144e1d82fd46928b194266869ab94822cb7c307075051434a8baf0910be3a8"
	I0930 11:18:45.597379 2748394 logs.go:123] Gathering logs for containerd ...
	I0930 11:18:45.597405 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0930 11:18:45.680691 2748394 logs.go:123] Gathering logs for coredns [71d296562fe24f66e1a29229021fba4fcf7228bfccea2b74202e1f3b5a5c5061] ...
	I0930 11:18:45.680728 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71d296562fe24f66e1a29229021fba4fcf7228bfccea2b74202e1f3b5a5c5061"
	I0930 11:18:45.733912 2748394 logs.go:123] Gathering logs for kube-apiserver [27928206b912b5caa53bfc5467de1638284a81a43e7262d3661bd5d8430a9d7f] ...
	I0930 11:18:45.733945 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27928206b912b5caa53bfc5467de1638284a81a43e7262d3661bd5d8430a9d7f"
	I0930 11:18:45.799948 2748394 logs.go:123] Gathering logs for kube-apiserver [2e0c3eafc3ba0696284889beac444aad70eb46423288ac9bd41aa4dd0ed4a245] ...
	I0930 11:18:45.799983 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e0c3eafc3ba0696284889beac444aad70eb46423288ac9bd41aa4dd0ed4a245"
	I0930 11:18:45.879945 2748394 logs.go:123] Gathering logs for kube-scheduler [4c39e9082952590d679ee58214b6a4fa416b2a839c581ae827b77ed10269e492] ...
	I0930 11:18:45.879980 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c39e9082952590d679ee58214b6a4fa416b2a839c581ae827b77ed10269e492"
	I0930 11:18:45.927890 2748394 logs.go:123] Gathering logs for storage-provisioner [8d88cb7d95363af60796acda1cf39e0daf68f34c71e9906563eed8aa171bda75] ...
	I0930 11:18:45.927923 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d88cb7d95363af60796acda1cf39e0daf68f34c71e9906563eed8aa171bda75"
	I0930 11:18:45.966720 2748394 logs.go:123] Gathering logs for storage-provisioner [bd1018a19355d11d0c01dc9dde1d023a8af02b1ad94db1fb3d13c565b433d42e] ...
	I0930 11:18:45.966747 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1018a19355d11d0c01dc9dde1d023a8af02b1ad94db1fb3d13c565b433d42e"
	I0930 11:18:46.022201 2748394 logs.go:123] Gathering logs for container status ...
	I0930 11:18:46.022232 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 11:18:46.079558 2748394 logs.go:123] Gathering logs for kubelet ...
	I0930 11:18:46.079588 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 11:18:46.139311 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.773445     659 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.139554 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.694924     659 reflector.go:138] object-"kube-system"/"coredns-token-qm98j": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-qm98j" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.139796 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.695012     659 reflector.go:138] object-"kube-system"/"metrics-server-token-rn6jg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-rn6jg" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.140021 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.773604     659 reflector.go:138] object-"kube-system"/"kindnet-token-zrm6d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-zrm6d" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.140246 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.773702     659 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.140469 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.773793     659 reflector.go:138] object-"kube-system"/"kube-proxy-token-2qnp8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2qnp8" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.140683 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.773857     659 reflector.go:138] object-"default"/"default-token-gr5lr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gr5lr" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.140916 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.689457     659 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jrkb5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jrkb5" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.150042 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:17 old-k8s-version-852171 kubelet[659]: E0930 11:13:17.838784     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0930 11:18:46.150236 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:17 old-k8s-version-852171 kubelet[659]: E0930 11:13:17.967882     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.153486 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:32 old-k8s-version-852171 kubelet[659]: E0930 11:13:32.371316     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0930 11:18:46.155273 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:40 old-k8s-version-852171 kubelet[659]: E0930 11:13:40.058681     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.155616 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:41 old-k8s-version-852171 kubelet[659]: E0930 11:13:41.064307     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.156152 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:45 old-k8s-version-852171 kubelet[659]: E0930 11:13:45.364824     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.156488 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:46 old-k8s-version-852171 kubelet[659]: E0930 11:13:46.682850     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.156934 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:48 old-k8s-version-852171 kubelet[659]: E0930 11:13:48.084485     659 pod_workers.go:191] Error syncing pod c6c21cee-2a02-43eb-b0b3-2097030726c9 ("storage-provisioner_kube-system(c6c21cee-2a02-43eb-b0b3-2097030726c9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c6c21cee-2a02-43eb-b0b3-2097030726c9)"
	W0930 11:18:46.157876 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:59 old-k8s-version-852171 kubelet[659]: E0930 11:13:59.117882     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.160390 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:59 old-k8s-version-852171 kubelet[659]: E0930 11:13:59.380652     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0930 11:18:46.160861 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:06 old-k8s-version-852171 kubelet[659]: E0930 11:14:06.682813     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.161050 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:11 old-k8s-version-852171 kubelet[659]: E0930 11:14:11.376076     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.161383 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:18 old-k8s-version-852171 kubelet[659]: E0930 11:14:18.362371     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.161597 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:26 old-k8s-version-852171 kubelet[659]: E0930 11:14:26.362999     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.162196 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:32 old-k8s-version-852171 kubelet[659]: E0930 11:14:32.235186     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.162528 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:36 old-k8s-version-852171 kubelet[659]: E0930 11:14:36.682863     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.164964 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:40 old-k8s-version-852171 kubelet[659]: E0930 11:14:40.370995     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0930 11:18:46.165292 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:47 old-k8s-version-852171 kubelet[659]: E0930 11:14:47.362791     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.165476 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:55 old-k8s-version-852171 kubelet[659]: E0930 11:14:55.363277     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.165805 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:00 old-k8s-version-852171 kubelet[659]: E0930 11:15:00.372648     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.165992 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:07 old-k8s-version-852171 kubelet[659]: E0930 11:15:07.363816     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.166583 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:13 old-k8s-version-852171 kubelet[659]: E0930 11:15:13.370084     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.166912 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:16 old-k8s-version-852171 kubelet[659]: E0930 11:15:16.682725     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.167098 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:18 old-k8s-version-852171 kubelet[659]: E0930 11:15:18.362719     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.167282 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:29 old-k8s-version-852171 kubelet[659]: E0930 11:15:29.362752     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.167637 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:31 old-k8s-version-852171 kubelet[659]: E0930 11:15:31.363303     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.167828 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:41 old-k8s-version-852171 kubelet[659]: E0930 11:15:41.363227     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.168157 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:43 old-k8s-version-852171 kubelet[659]: E0930 11:15:43.369318     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.168353 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:54 old-k8s-version-852171 kubelet[659]: E0930 11:15:54.362782     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.168681 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:57 old-k8s-version-852171 kubelet[659]: E0930 11:15:57.362734     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.171117 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:07 old-k8s-version-852171 kubelet[659]: E0930 11:16:07.371795     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0930 11:18:46.171445 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:11 old-k8s-version-852171 kubelet[659]: E0930 11:16:11.363023     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.171634 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:19 old-k8s-version-852171 kubelet[659]: E0930 11:16:19.364941     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.171960 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:22 old-k8s-version-852171 kubelet[659]: E0930 11:16:22.362331     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.172145 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:32 old-k8s-version-852171 kubelet[659]: E0930 11:16:32.376208     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.172734 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:35 old-k8s-version-852171 kubelet[659]: E0930 11:16:35.590451     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.173060 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:36 old-k8s-version-852171 kubelet[659]: E0930 11:16:36.682806     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.173244 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:46 old-k8s-version-852171 kubelet[659]: E0930 11:16:46.362767     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.173579 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:49 old-k8s-version-852171 kubelet[659]: E0930 11:16:49.362912     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.173765 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:57 old-k8s-version-852171 kubelet[659]: E0930 11:16:57.366810     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.174094 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:02 old-k8s-version-852171 kubelet[659]: E0930 11:17:02.362440     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.174278 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:10 old-k8s-version-852171 kubelet[659]: E0930 11:17:10.363234     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.174604 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:16 old-k8s-version-852171 kubelet[659]: E0930 11:17:16.362318     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.174790 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:22 old-k8s-version-852171 kubelet[659]: E0930 11:17:22.362662     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.175116 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:27 old-k8s-version-852171 kubelet[659]: E0930 11:17:27.362898     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.175298 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:33 old-k8s-version-852171 kubelet[659]: E0930 11:17:33.362859     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.175629 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:40 old-k8s-version-852171 kubelet[659]: E0930 11:17:40.362306     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.175813 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:47 old-k8s-version-852171 kubelet[659]: E0930 11:17:47.362810     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.176140 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:54 old-k8s-version-852171 kubelet[659]: E0930 11:17:54.362341     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.176323 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:00 old-k8s-version-852171 kubelet[659]: E0930 11:18:00.373033     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.176648 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:09 old-k8s-version-852171 kubelet[659]: E0930 11:18:09.364507     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.176834 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:14 old-k8s-version-852171 kubelet[659]: E0930 11:18:14.362968     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.177164 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:21 old-k8s-version-852171 kubelet[659]: E0930 11:18:21.363375     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.177348 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:27 old-k8s-version-852171 kubelet[659]: E0930 11:18:27.362752     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.177673 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:32 old-k8s-version-852171 kubelet[659]: E0930 11:18:32.363035     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.177859 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:42 old-k8s-version-852171 kubelet[659]: E0930 11:18:42.363229     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0930 11:18:46.177869 2748394 logs.go:123] Gathering logs for describe nodes ...
	I0930 11:18:46.177891 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 11:18:46.326435 2748394 logs.go:123] Gathering logs for etcd [bf09b410e75f8b5b7a1af956c961ed5d411b85e54a06fdec3f471c80d8088e5b] ...
	I0930 11:18:46.326467 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf09b410e75f8b5b7a1af956c961ed5d411b85e54a06fdec3f471c80d8088e5b"
	I0930 11:18:46.367406 2748394 logs.go:123] Gathering logs for etcd [ac879314a70238e9a3d188a20b16c633d0913c497744edf9f8fb4e81d4d8cffc] ...
	I0930 11:18:46.367438 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac879314a70238e9a3d188a20b16c633d0913c497744edf9f8fb4e81d4d8cffc"
	I0930 11:18:46.412274 2748394 logs.go:123] Gathering logs for dmesg ...
	I0930 11:18:46.412304 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 11:18:46.428905 2748394 out.go:358] Setting ErrFile to fd 2...
	I0930 11:18:46.428929 2748394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0930 11:18:46.428976 2748394 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0930 11:18:46.428995 2748394 out.go:270]   Sep 30 11:18:14 old-k8s-version-852171 kubelet[659]: E0930 11:18:14.362968     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 30 11:18:14 old-k8s-version-852171 kubelet[659]: E0930 11:18:14.362968     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.429003 2748394 out.go:270]   Sep 30 11:18:21 old-k8s-version-852171 kubelet[659]: E0930 11:18:21.363375     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	  Sep 30 11:18:21 old-k8s-version-852171 kubelet[659]: E0930 11:18:21.363375     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.429010 2748394 out.go:270]   Sep 30 11:18:27 old-k8s-version-852171 kubelet[659]: E0930 11:18:27.362752     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 30 11:18:27 old-k8s-version-852171 kubelet[659]: E0930 11:18:27.362752     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.429017 2748394 out.go:270]   Sep 30 11:18:32 old-k8s-version-852171 kubelet[659]: E0930 11:18:32.363035     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	  Sep 30 11:18:32 old-k8s-version-852171 kubelet[659]: E0930 11:18:32.363035     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.429032 2748394 out.go:270]   Sep 30 11:18:42 old-k8s-version-852171 kubelet[659]: E0930 11:18:42.363229     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 30 11:18:42 old-k8s-version-852171 kubelet[659]: E0930 11:18:42.363229     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0930 11:18:46.429038 2748394 out.go:358] Setting ErrFile to fd 2...
	I0930 11:18:46.429045 2748394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:18:56.430176 2748394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:18:56.441626 2748394 api_server.go:72] duration metric: took 6m3.002247707s to wait for apiserver process to appear ...
	I0930 11:18:56.441651 2748394 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:18:56.443811 2748394 out.go:201] 
	W0930 11:18:56.445728 2748394 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0930 11:18:56.445746 2748394 out.go:270] * 
	* 
	W0930 11:18:56.446688 2748394 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 11:18:56.449054 2748394 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-852171 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-852171
helpers_test.go:235: (dbg) docker inspect old-k8s-version-852171:

-- stdout --
	[
	    {
	        "Id": "7a096c280e7b3b895f3fe37c0a0c5d936eea6d3bd36253bad8f531da72b86d53",
	        "Created": "2024-09-30T11:09:59.608446514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2748594,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-30T11:12:45.603951007Z",
	            "FinishedAt": "2024-09-30T11:12:44.327469234Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/7a096c280e7b3b895f3fe37c0a0c5d936eea6d3bd36253bad8f531da72b86d53/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a096c280e7b3b895f3fe37c0a0c5d936eea6d3bd36253bad8f531da72b86d53/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a096c280e7b3b895f3fe37c0a0c5d936eea6d3bd36253bad8f531da72b86d53/hosts",
	        "LogPath": "/var/lib/docker/containers/7a096c280e7b3b895f3fe37c0a0c5d936eea6d3bd36253bad8f531da72b86d53/7a096c280e7b3b895f3fe37c0a0c5d936eea6d3bd36253bad8f531da72b86d53-json.log",
	        "Name": "/old-k8s-version-852171",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-852171:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-852171",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1767eab61cc617ddf235e5a81f94c54d8cdf23fb35011212241f5af5b42df54c-init/diff:/var/lib/docker/overlay2/cfa9a1331be3f2237f098c9bbe24267823c6ebd2f4d869da3f0aaddb0fb064b7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1767eab61cc617ddf235e5a81f94c54d8cdf23fb35011212241f5af5b42df54c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1767eab61cc617ddf235e5a81f94c54d8cdf23fb35011212241f5af5b42df54c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1767eab61cc617ddf235e5a81f94c54d8cdf23fb35011212241f5af5b42df54c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-852171",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-852171/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-852171",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-852171",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-852171",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a800903ee3be9a67e5c263bdccfda5dd229113c853dd310d9b0aee3433984f88",
	            "SandboxKey": "/var/run/docker/netns/a800903ee3be",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41598"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41599"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41602"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41600"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41601"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-852171": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d0c4ebe40a90c397515916d77773de39135e4d9c30caf177c2a98cf41b9e6267",
	                    "EndpointID": "c411d09357660ac9d3fffb702921775cacee8950fd2272fcce86c4ebeb54b706",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-852171",
	                        "7a096c280e7b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-852171 -n old-k8s-version-852171
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-852171 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-852171 logs -n 25: (2.569033647s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-140647 sudo                                  | cilium-140647            | jenkins | v1.34.0 | 30 Sep 24 11:08 UTC |                     |
	|         | systemctl status crio --all                            |                          |         |         |                     |                     |
	|         | --full --no-pager                                      |                          |         |         |                     |                     |
	| ssh     | -p cilium-140647 sudo                                  | cilium-140647            | jenkins | v1.34.0 | 30 Sep 24 11:08 UTC |                     |
	|         | systemctl cat crio --no-pager                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-140647 sudo find                             | cilium-140647            | jenkins | v1.34.0 | 30 Sep 24 11:08 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                          |         |         |                     |                     |
	| ssh     | -p cilium-140647 sudo crio                             | cilium-140647            | jenkins | v1.34.0 | 30 Sep 24 11:08 UTC |                     |
	|         | config                                                 |                          |         |         |                     |                     |
	| delete  | -p cilium-140647                                       | cilium-140647            | jenkins | v1.34.0 | 30 Sep 24 11:08 UTC | 30 Sep 24 11:08 UTC |
	| start   | -p cert-expiration-231637                              | cert-expiration-231637   | jenkins | v1.34.0 | 30 Sep 24 11:08 UTC | 30 Sep 24 11:09 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-216994                               | force-systemd-env-216994 | jenkins | v1.34.0 | 30 Sep 24 11:09 UTC | 30 Sep 24 11:09 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-216994                            | force-systemd-env-216994 | jenkins | v1.34.0 | 30 Sep 24 11:09 UTC | 30 Sep 24 11:09 UTC |
	| start   | -p cert-options-557581                                 | cert-options-557581      | jenkins | v1.34.0 | 30 Sep 24 11:09 UTC | 30 Sep 24 11:09 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-557581 ssh                                | cert-options-557581      | jenkins | v1.34.0 | 30 Sep 24 11:09 UTC | 30 Sep 24 11:09 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-557581 -- sudo                         | cert-options-557581      | jenkins | v1.34.0 | 30 Sep 24 11:09 UTC | 30 Sep 24 11:09 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-557581                                 | cert-options-557581      | jenkins | v1.34.0 | 30 Sep 24 11:09 UTC | 30 Sep 24 11:09 UTC |
	| start   | -p old-k8s-version-852171                              | old-k8s-version-852171   | jenkins | v1.34.0 | 30 Sep 24 11:09 UTC | 30 Sep 24 11:12 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-231637                              | cert-expiration-231637   | jenkins | v1.34.0 | 30 Sep 24 11:12 UTC | 30 Sep 24 11:12 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-231637                              | cert-expiration-231637   | jenkins | v1.34.0 | 30 Sep 24 11:12 UTC | 30 Sep 24 11:12 UTC |
	| addons  | enable metrics-server -p old-k8s-version-852171        | old-k8s-version-852171   | jenkins | v1.34.0 | 30 Sep 24 11:12 UTC | 30 Sep 24 11:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| start   | -p no-preload-935352                                   | no-preload-935352        | jenkins | v1.34.0 | 30 Sep 24 11:12 UTC | 30 Sep 24 11:13 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-852171                              | old-k8s-version-852171   | jenkins | v1.34.0 | 30 Sep 24 11:12 UTC | 30 Sep 24 11:12 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-852171             | old-k8s-version-852171   | jenkins | v1.34.0 | 30 Sep 24 11:12 UTC | 30 Sep 24 11:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-852171                              | old-k8s-version-852171   | jenkins | v1.34.0 | 30 Sep 24 11:12 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-935352             | no-preload-935352        | jenkins | v1.34.0 | 30 Sep 24 11:13 UTC | 30 Sep 24 11:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-935352                                   | no-preload-935352        | jenkins | v1.34.0 | 30 Sep 24 11:13 UTC | 30 Sep 24 11:14 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-935352                  | no-preload-935352        | jenkins | v1.34.0 | 30 Sep 24 11:14 UTC | 30 Sep 24 11:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-935352                                   | no-preload-935352        | jenkins | v1.34.0 | 30 Sep 24 11:14 UTC | 30 Sep 24 11:18 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| image   | no-preload-935352 image list                           | no-preload-935352        | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --format=json                                          |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:14:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:14:05.419295 2753662 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:14:05.419491 2753662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:14:05.419504 2753662 out.go:358] Setting ErrFile to fd 2...
	I0930 11:14:05.419509 2753662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:14:05.419857 2753662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	I0930 11:14:05.420302 2753662 out.go:352] Setting JSON to false
	I0930 11:14:05.421450 2753662 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":154594,"bootTime":1727540252,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0930 11:14:05.421524 2753662 start.go:139] virtualization:  
	I0930 11:14:05.425780 2753662 out.go:177] * [no-preload-935352] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 11:14:05.427745 2753662 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:14:05.427823 2753662 notify.go:220] Checking for updates...
	I0930 11:14:05.432397 2753662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:14:05.434333 2753662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 11:14:05.436034 2753662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	I0930 11:14:05.437890 2753662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 11:14:05.441398 2753662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:14:05.443910 2753662 config.go:182] Loaded profile config "no-preload-935352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 11:14:05.444549 2753662 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:14:05.470495 2753662 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 11:14:05.470605 2753662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 11:14:05.550279 2753662 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 11:14:05.540122802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 11:14:05.550406 2753662 docker.go:318] overlay module found
	I0930 11:14:05.552447 2753662 out.go:177] * Using the docker driver based on existing profile
	I0930 11:14:05.553945 2753662 start.go:297] selected driver: docker
	I0930 11:14:05.553959 2753662 start.go:901] validating driver "docker" against &{Name:no-preload-935352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-935352 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:14:05.554076 2753662 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:14:05.554815 2753662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 11:14:05.609717 2753662 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 11:14:05.599478586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 11:14:05.610286 2753662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:14:05.610317 2753662 cni.go:84] Creating CNI manager for ""
	I0930 11:14:05.610373 2753662 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0930 11:14:05.610424 2753662 start.go:340] cluster config:
	{Name:no-preload-935352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-935352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:14:05.612563 2753662 out.go:177] * Starting "no-preload-935352" primary control-plane node in "no-preload-935352" cluster
	I0930 11:14:05.614249 2753662 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0930 11:14:05.615941 2753662 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0930 11:14:05.617743 2753662 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0930 11:14:05.617860 2753662 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0930 11:14:05.617946 2753662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/config.json ...
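For reference, the profile file written here is plain JSON; a quick way to inspect it from the Jenkins host would be along these lines (a sketch using the exact path from the line above):

    # pretty-print the saved minikube profile config (path copied from the log line above)
    python3 -m json.tool /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/config.json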
	I0930 11:14:05.618352 2753662 cache.go:107] acquiring lock: {Name:mk919581105efeca9e6610bbc5b191512ed480dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:14:05.618441 2753662 cache.go:115] /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0930 11:14:05.618454 2753662 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.714µs
	I0930 11:14:05.618462 2753662 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0930 11:14:05.618479 2753662 cache.go:107] acquiring lock: {Name:mkb622d7d9363e25c61f3852c1bd83fb0418207c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:14:05.618511 2753662 cache.go:115] /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0930 11:14:05.618521 2753662 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 49.329µs
	I0930 11:14:05.618528 2753662 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0930 11:14:05.618537 2753662 cache.go:107] acquiring lock: {Name:mk5b1a83c6e8c397636692be7b7c8d661db13b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:14:05.618570 2753662 cache.go:115] /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0930 11:14:05.618578 2753662 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 41.773µs
	I0930 11:14:05.618587 2753662 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0930 11:14:05.618596 2753662 cache.go:107] acquiring lock: {Name:mk32558c522399986352426bd8412d78e2c202cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:14:05.618620 2753662 cache.go:115] /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0930 11:14:05.618625 2753662 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 30.383µs
	I0930 11:14:05.618630 2753662 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0930 11:14:05.618671 2753662 cache.go:107] acquiring lock: {Name:mk45a58bc0ac1e3acfc198fc7cac03602f4178fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:14:05.618706 2753662 cache.go:115] /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0930 11:14:05.618711 2753662 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 41.428µs
	I0930 11:14:05.618719 2753662 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0930 11:14:05.618728 2753662 cache.go:107] acquiring lock: {Name:mkb7061838d85fae5dcfe82d698904349ab5f9c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:14:05.618761 2753662 cache.go:115] /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0930 11:14:05.618771 2753662 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 43.626µs
	I0930 11:14:05.618777 2753662 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0930 11:14:05.618787 2753662 cache.go:107] acquiring lock: {Name:mk138fcf67dee5fafe61536986a6c237e7d212d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:14:05.618820 2753662 cache.go:115] /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0930 11:14:05.618829 2753662 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 43.857µs
	I0930 11:14:05.618835 2753662 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0930 11:14:05.618843 2753662 cache.go:107] acquiring lock: {Name:mk212173b96424c54608523ef0fcb69c9a12c000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:14:05.618873 2753662 cache.go:115] /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0930 11:14:05.618882 2753662 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 39.294µs
	I0930 11:14:05.618888 2753662 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0930 11:14:05.618893 2753662 cache.go:87] Successfully saved all images to host disk.
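The eight cache hits above correspond to image tarballs already on disk; they could be listed with something like the following (a sketch; the directory is the one named in the cache lines):

    # list the arm64 image tarballs minikube reused instead of re-downloading
    ls -R /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/images/arm64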
	I0930 11:14:05.639870 2753662 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon, skipping pull
	I0930 11:14:05.639894 2753662 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in daemon, skipping load
	I0930 11:14:05.639913 2753662 cache.go:194] Successfully downloaded all kic artifacts
	I0930 11:14:05.639945 2753662 start.go:360] acquireMachinesLock for no-preload-935352: {Name:mkd6a7480cc679d97937bd1bd94cab8f5d1ac813 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:14:05.640003 2753662 start.go:364] duration metric: took 35.873µs to acquireMachinesLock for "no-preload-935352"
	I0930 11:14:05.640026 2753662 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:14:05.640035 2753662 fix.go:54] fixHost starting: 
	I0930 11:14:05.640311 2753662 cli_runner.go:164] Run: docker container inspect no-preload-935352 --format={{.State.Status}}
	I0930 11:14:05.657183 2753662 fix.go:112] recreateIfNeeded on no-preload-935352: state=Stopped err=<nil>
	W0930 11:14:05.657213 2753662 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:14:05.659196 2753662 out.go:177] * Restarting existing docker container for "no-preload-935352" ...
	I0930 11:14:06.403964 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:08.405089 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:05.660788 2753662 cli_runner.go:164] Run: docker start no-preload-935352
	I0930 11:14:06.014057 2753662 cli_runner.go:164] Run: docker container inspect no-preload-935352 --format={{.State.Status}}
	I0930 11:14:06.036811 2753662 kic.go:430] container "no-preload-935352" state is running.
	I0930 11:14:06.037233 2753662 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-935352
	I0930 11:14:06.065010 2753662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/config.json ...
	I0930 11:14:06.065369 2753662 machine.go:93] provisionDockerMachine start ...
	I0930 11:14:06.065482 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:06.091827 2753662 main.go:141] libmachine: Using SSH client type: native
	I0930 11:14:06.092092 2753662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41603 <nil> <nil>}
	I0930 11:14:06.092101 2753662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:14:06.093025 2753662 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0930 11:14:09.223316 2753662 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-935352
	
	I0930 11:14:09.223339 2753662 ubuntu.go:169] provisioning hostname "no-preload-935352"
	I0930 11:14:09.223426 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:09.246919 2753662 main.go:141] libmachine: Using SSH client type: native
	I0930 11:14:09.247282 2753662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41603 <nil> <nil>}
	I0930 11:14:09.247298 2753662 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-935352 && echo "no-preload-935352" | sudo tee /etc/hostname
	I0930 11:14:09.393973 2753662 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-935352
	
	I0930 11:14:09.394116 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:09.413637 2753662 main.go:141] libmachine: Using SSH client type: native
	I0930 11:14:09.413883 2753662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41603 <nil> <nil>}
	I0930 11:14:09.413905 2753662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-935352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-935352/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-935352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:14:09.543901 2753662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
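The shell fragment above only edits /etc/hosts when the hostname mapping is missing; a quick manual check of the result (a sketch, run from the host against the container named in this log) could be:

    # confirm the node container resolves its own hostname via 127.0.1.1
    docker exec no-preload-935352 grep no-preload-935352 /etc/hosts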
	I0930 11:14:09.543930 2753662 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19734-2538756/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-2538756/.minikube}
	I0930 11:14:09.543954 2753662 ubuntu.go:177] setting up certificates
	I0930 11:14:09.543964 2753662 provision.go:84] configureAuth start
	I0930 11:14:09.544023 2753662 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-935352
	I0930 11:14:09.561000 2753662 provision.go:143] copyHostCerts
	I0930 11:14:09.561074 2753662 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.pem, removing ...
	I0930 11:14:09.561096 2753662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.pem
	I0930 11:14:09.561180 2753662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.pem (1078 bytes)
	I0930 11:14:09.561285 2753662 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-2538756/.minikube/cert.pem, removing ...
	I0930 11:14:09.561297 2753662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-2538756/.minikube/cert.pem
	I0930 11:14:09.561324 2753662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-2538756/.minikube/cert.pem (1123 bytes)
	I0930 11:14:09.561385 2753662 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-2538756/.minikube/key.pem, removing ...
	I0930 11:14:09.561394 2753662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-2538756/.minikube/key.pem
	I0930 11:14:09.561420 2753662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-2538756/.minikube/key.pem (1679 bytes)
	I0930 11:14:09.561472 2753662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca-key.pem org=jenkins.no-preload-935352 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-935352]
	I0930 11:14:10.510675 2753662 provision.go:177] copyRemoteCerts
	I0930 11:14:10.510754 2753662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:14:10.510802 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:10.528172 2753662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41603 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/no-preload-935352/id_rsa Username:docker}
	I0930 11:14:10.620444 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0930 11:14:10.646541 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0930 11:14:10.671737 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:14:10.697832 2753662 provision.go:87] duration metric: took 1.153844166s to configureAuth
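configureAuth regenerated the server certificate with the SANs listed above and copied it to /etc/docker; one way to double-check the SANs that ended up in the copied cert (a sketch, assuming openssl is available in the node image) is:

    # print the Subject Alternative Names of the server cert copied to the node
    docker exec no-preload-935352 openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'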
	I0930 11:14:10.697868 2753662 ubuntu.go:193] setting minikube options for container-runtime
	I0930 11:14:10.698068 2753662 config.go:182] Loaded profile config "no-preload-935352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 11:14:10.698084 2753662 machine.go:96] duration metric: took 4.632701331s to provisionDockerMachine
	I0930 11:14:10.698093 2753662 start.go:293] postStartSetup for "no-preload-935352" (driver="docker")
	I0930 11:14:10.698108 2753662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:14:10.698165 2753662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:14:10.698219 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:10.715414 2753662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41603 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/no-preload-935352/id_rsa Username:docker}
	I0930 11:14:10.809008 2753662 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:14:10.812417 2753662 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0930 11:14:10.812456 2753662 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0930 11:14:10.812469 2753662 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0930 11:14:10.812476 2753662 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0930 11:14:10.812494 2753662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-2538756/.minikube/addons for local assets ...
	I0930 11:14:10.812555 2753662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-2538756/.minikube/files for local assets ...
	I0930 11:14:10.812638 2753662 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-2538756/.minikube/files/etc/ssl/certs/25441572.pem -> 25441572.pem in /etc/ssl/certs
	I0930 11:14:10.812752 2753662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:14:10.823977 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/files/etc/ssl/certs/25441572.pem --> /etc/ssl/certs/25441572.pem (1708 bytes)
	I0930 11:14:10.854935 2753662 start.go:296] duration metric: took 156.822439ms for postStartSetup
	I0930 11:14:10.855030 2753662 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 11:14:10.855071 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:10.871808 2753662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41603 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/no-preload-935352/id_rsa Username:docker}
	I0930 11:14:10.961366 2753662 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0930 11:14:10.965760 2753662 fix.go:56] duration metric: took 5.325715861s for fixHost
	I0930 11:14:10.965792 2753662 start.go:83] releasing machines lock for "no-preload-935352", held for 5.325770448s
	I0930 11:14:10.965877 2753662 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-935352
	I0930 11:14:10.982695 2753662 ssh_runner.go:195] Run: cat /version.json
	I0930 11:14:10.982758 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:10.983000 2753662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:14:10.983185 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:11.006269 2753662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41603 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/no-preload-935352/id_rsa Username:docker}
	I0930 11:14:11.015733 2753662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41603 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/no-preload-935352/id_rsa Username:docker}
	I0930 11:14:11.246294 2753662 ssh_runner.go:195] Run: systemctl --version
	I0930 11:14:11.250820 2753662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 11:14:11.256107 2753662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0930 11:14:11.274112 2753662 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0930 11:14:11.274208 2753662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:14:11.283580 2753662 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
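The two find commands above patch the loopback CNI config and would rename any bridge/podman configs out of the way; the surviving configs can be listed with a simple check such as (sketch):

    # show which CNI config files remain active inside the node
    docker exec no-preload-935352 ls -l /etc/cni/net.d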
	I0930 11:14:11.283638 2753662 start.go:495] detecting cgroup driver to use...
	I0930 11:14:11.283677 2753662 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 11:14:11.283731 2753662 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0930 11:14:11.297142 2753662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 11:14:11.308845 2753662 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:14:11.308959 2753662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:14:11.322786 2753662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:14:11.334905 2753662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:14:11.463493 2753662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:14:11.559298 2753662 docker.go:233] disabling docker service ...
	I0930 11:14:11.559384 2753662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:14:11.573427 2753662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:14:11.585502 2753662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:14:11.696343 2753662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:14:11.801075 2753662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:14:11.813732 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:14:11.833137 2753662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0930 11:14:11.843852 2753662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0930 11:14:11.855106 2753662 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0930 11:14:11.855236 2753662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0930 11:14:11.866303 2753662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 11:14:11.876647 2753662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0930 11:14:11.887252 2753662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 11:14:11.898320 2753662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:14:11.909899 2753662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0930 11:14:11.920346 2753662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0930 11:14:11.931100 2753662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0930 11:14:11.941393 2753662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:14:11.950714 2753662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:14:11.959411 2753662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:14:12.060074 2753662 ssh_runner.go:195] Run: sudo systemctl restart containerd
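After the sed edits above, containerd is restarted with cgroupfs as the cgroup driver; verifying that the key settings actually landed could look like this (sketch):

    # SystemdCgroup should be false and the sandbox image should be pause:3.10 after the edits
    docker exec no-preload-935352 grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml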
	I0930 11:14:12.230916 2753662 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0930 11:14:12.231027 2753662 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0930 11:14:12.235288 2753662 start.go:563] Will wait 60s for crictl version
	I0930 11:14:12.235383 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:14:12.239148 2753662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:14:12.285639 2753662 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0930 11:14:12.285748 2753662 ssh_runner.go:195] Run: containerd --version
	I0930 11:14:12.309282 2753662 ssh_runner.go:195] Run: containerd --version
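The same runtime information reported above can be queried by hand over the CRI socket; a minimal sketch (standard crictl flags) would be:

    # query containerd through the CRI endpoint minikube configured in /etc/crictl.yaml
    docker exec no-preload-935352 crictl --runtime-endpoint unix:///run/containerd/containerd.sock version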
	I0930 11:14:12.337038 2753662 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0930 11:14:12.338917 2753662 cli_runner.go:164] Run: docker network inspect no-preload-935352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 11:14:12.354650 2753662 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0930 11:14:12.358641 2753662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:14:12.369863 2753662 kubeadm.go:883] updating cluster {Name:no-preload-935352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-935352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenk
ins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:14:12.369984 2753662 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0930 11:14:12.370035 2753662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:14:12.413536 2753662 containerd.go:627] all images are preloaded for containerd runtime.
	I0930 11:14:12.413559 2753662 cache_images.go:84] Images are preloaded, skipping loading
	I0930 11:14:12.413568 2753662 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0930 11:14:12.413682 2753662 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-935352 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-935352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
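The kubelet ExecStart shown above is written to a systemd drop-in on the node (the 10-kubeadm.conf transferred a few lines below); reading it back is a one-liner (sketch):

    # read back the kubelet drop-in that carries the ExecStart flags above
    docker exec no-preload-935352 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf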
	I0930 11:14:12.413761 2753662 ssh_runner.go:195] Run: sudo crictl info
	I0930 11:14:12.458040 2753662 cni.go:84] Creating CNI manager for ""
	I0930 11:14:12.458068 2753662 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0930 11:14:12.458088 2753662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:14:12.458131 2753662 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-935352 NodeName:no-preload-935352 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:14:12.458305 2753662 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-935352"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
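The three-document config above (InitConfiguration/ClusterConfiguration plus KubeletConfiguration and KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new further down; if a syntax check were wanted, recent kubeadm releases ship a validator, roughly as follows (a sketch, assuming the subcommand is present in the v1.31 binaries minikube staged on the node):

    # optional sanity check of the generated config; binary and file paths taken from the log lines below
    docker exec no-preload-935352 /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new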
	
	I0930 11:14:12.458380 2753662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:14:12.468129 2753662 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:14:12.468204 2753662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 11:14:12.476912 2753662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0930 11:14:12.500397 2753662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:14:12.520439 2753662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0930 11:14:12.539469 2753662 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0930 11:14:12.543109 2753662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:14:12.553752 2753662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:14:12.642173 2753662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:14:12.658744 2753662 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352 for IP: 192.168.85.2
	I0930 11:14:12.658819 2753662 certs.go:194] generating shared ca certs ...
	I0930 11:14:12.658852 2753662 certs.go:226] acquiring lock for ca certs: {Name:mkff6faeb681279e5ac456a1e9fb9c9dcac2d430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:14:12.659055 2753662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.key
	I0930 11:14:12.659127 2753662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.key
	I0930 11:14:12.659159 2753662 certs.go:256] generating profile certs ...
	I0930 11:14:12.659273 2753662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.key
	I0930 11:14:12.659372 2753662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/apiserver.key.01740ef5
	I0930 11:14:12.659437 2753662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/proxy-client.key
	I0930 11:14:12.659577 2753662 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/2544157.pem (1338 bytes)
	W0930 11:14:12.659725 2753662 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/2544157_empty.pem, impossibly tiny 0 bytes
	I0930 11:14:12.659756 2753662 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 11:14:12.659812 2753662 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/ca.pem (1078 bytes)
	I0930 11:14:12.659858 2753662 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:14:12.659920 2753662 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/key.pem (1679 bytes)
	I0930 11:14:12.659992 2753662 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2538756/.minikube/files/etc/ssl/certs/25441572.pem (1708 bytes)
	I0930 11:14:12.660669 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:14:12.693255 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 11:14:12.717908 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:14:12.746443 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:14:12.771793 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 11:14:12.797387 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:14:12.861512 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:14:12.896459 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 11:14:12.928082 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:14:12.969521 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/certs/2544157.pem --> /usr/share/ca-certificates/2544157.pem (1338 bytes)
	I0930 11:14:12.996414 2753662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2538756/.minikube/files/etc/ssl/certs/25441572.pem --> /usr/share/ca-certificates/25441572.pem (1708 bytes)
	I0930 11:14:13.026927 2753662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:14:13.045492 2753662 ssh_runner.go:195] Run: openssl version
	I0930 11:14:13.053728 2753662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:14:13.063581 2753662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:14:13.067337 2753662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:14:13.067411 2753662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:14:13.074958 2753662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:14:13.084106 2753662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2544157.pem && ln -fs /usr/share/ca-certificates/2544157.pem /etc/ssl/certs/2544157.pem"
	I0930 11:14:13.093663 2753662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2544157.pem
	I0930 11:14:13.097236 2753662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 10:35 /usr/share/ca-certificates/2544157.pem
	I0930 11:14:13.097297 2753662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2544157.pem
	I0930 11:14:13.104415 2753662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2544157.pem /etc/ssl/certs/51391683.0"
	I0930 11:14:13.114161 2753662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25441572.pem && ln -fs /usr/share/ca-certificates/25441572.pem /etc/ssl/certs/25441572.pem"
	I0930 11:14:13.123758 2753662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25441572.pem
	I0930 11:14:13.127139 2753662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 10:35 /usr/share/ca-certificates/25441572.pem
	I0930 11:14:13.127228 2753662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25441572.pem
	I0930 11:14:13.134418 2753662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25441572.pem /etc/ssl/certs/3ec20f2e.0"
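Each CA bundle copied under /usr/share/ca-certificates is made visible to OpenSSL by computing its subject hash (openssl x509 -hash -noout) and symlinking the certificate as <hash>.0 under /etc/ssl/certs, which is what the test -L / ln -fs commands above do. A minimal Go sketch of that step, assuming openssl is on PATH and the target directory is writable (installCA is an illustrative helper, not minikube's function, and the paths in main are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the certificate's OpenSSL subject hash and creates the
// <hash>.0 symlink in certsDir so OpenSSL-based clients can locate the CA.
func installCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace a stale link, as "ln -fs" would
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("install failed:", err)
	}
}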
	I0930 11:14:13.143482 2753662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:14:13.147071 2753662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:14:13.155448 2753662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:14:13.163008 2753662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:14:13.170307 2753662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:14:13.177232 2753662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:14:13.184381 2753662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
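The openssl -checkend 86400 runs above ask whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit status would signal that the certificate needs regenerating. A small Go sketch of the same probe, again shelling out to openssl (certificate paths copied from the log, helper name hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// validFor24h reports whether openssl believes the certificate will still be
// valid 86400 seconds from now; openssl exits non-zero if it will have expired.
func validFor24h(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s valid for 24h: %v\n", c, validFor24h(c))
	}
}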
	I0930 11:14:13.191323 2753662 kubeadm.go:392] StartCluster: {Name:no-preload-935352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-935352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:14:13.191419 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0930 11:14:13.191496 2753662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:14:13.230764 2753662 cri.go:89] found id: "30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec"
	I0930 11:14:13.230788 2753662 cri.go:89] found id: "809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396"
	I0930 11:14:13.230793 2753662 cri.go:89] found id: "af1e99e04df0f0c24bcbe7d7cb005203118dade8e84f48c7f2e33d3e80663022"
	I0930 11:14:13.230830 2753662 cri.go:89] found id: "7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21"
	I0930 11:14:13.230841 2753662 cri.go:89] found id: "7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a"
	I0930 11:14:13.230845 2753662 cri.go:89] found id: "08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67"
	I0930 11:14:13.230848 2753662 cri.go:89] found id: "b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0"
	I0930 11:14:13.230851 2753662 cri.go:89] found id: "0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5"
	I0930 11:14:13.230855 2753662 cri.go:89] found id: ""
	I0930 11:14:13.230923 2753662 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0930 11:14:13.243974 2753662 cri.go:116] JSON = null
	W0930 11:14:13.244082 2753662 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
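The warning above comes from comparing two views of the node: crictl ps found 8 kube-system containers, while runc list (used to detect paused containers that would need unpausing) returned null. A rough Go sketch of that comparison, running the same two commands that appear in the log; it needs root and a containerd host, and the variable names are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List kube-system container IDs the CRI knows about.
	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(psOut))

	// Ask runc directly; with no tracked containers it prints "null".
	runcOut, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc failed:", err)
		return
	}

	if strings.TrimSpace(string(runcOut)) == "null" && len(ids) > 0 {
		fmt.Printf("unpause check skipped: runc listed 0 containers, but crictl ps returned %d\n", len(ids))
	}
}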
	I0930 11:14:13.244175 2753662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 11:14:13.254974 2753662 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 11:14:13.254995 2753662 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 11:14:13.255056 2753662 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 11:14:13.265390 2753662 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 11:14:13.266066 2753662 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-935352" does not appear in /home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 11:14:13.266413 2753662 kubeconfig.go:62] /home/jenkins/minikube-integration/19734-2538756/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-935352" cluster setting kubeconfig missing "no-preload-935352" context setting]
	I0930 11:14:13.266895 2753662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/kubeconfig: {Name:mk7f607d1d45d210ea4523c0a214397b48972e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:14:13.268415 2753662 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 11:14:13.281918 2753662 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0930 11:14:13.282003 2753662 kubeadm.go:597] duration metric: took 26.999671ms to restartPrimaryControlPlane
	I0930 11:14:13.282027 2753662 kubeadm.go:394] duration metric: took 90.732102ms to StartCluster
	I0930 11:14:13.282067 2753662 settings.go:142] acquiring lock: {Name:mkc704d8ddfae8fa577b296109d2f74f59988133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:14:13.282154 2753662 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 11:14:13.283145 2753662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2538756/kubeconfig: {Name:mk7f607d1d45d210ea4523c0a214397b48972e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:14:13.283402 2753662 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0930 11:14:13.284139 2753662 config.go:182] Loaded profile config "no-preload-935352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 11:14:13.284077 2753662 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 11:14:13.284332 2753662 addons.go:69] Setting storage-provisioner=true in profile "no-preload-935352"
	I0930 11:14:13.284387 2753662 addons.go:69] Setting default-storageclass=true in profile "no-preload-935352"
	I0930 11:14:13.284403 2753662 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-935352"
	I0930 11:14:13.284701 2753662 cli_runner.go:164] Run: docker container inspect no-preload-935352 --format={{.State.Status}}
	I0930 11:14:13.284866 2753662 addons.go:234] Setting addon storage-provisioner=true in "no-preload-935352"
	W0930 11:14:13.284897 2753662 addons.go:243] addon storage-provisioner should already be in state true
	I0930 11:14:13.284950 2753662 host.go:66] Checking if "no-preload-935352" exists ...
	I0930 11:14:13.285216 2753662 addons.go:69] Setting dashboard=true in profile "no-preload-935352"
	I0930 11:14:13.285244 2753662 addons.go:234] Setting addon dashboard=true in "no-preload-935352"
	W0930 11:14:13.285252 2753662 addons.go:243] addon dashboard should already be in state true
	I0930 11:14:13.285281 2753662 host.go:66] Checking if "no-preload-935352" exists ...
	I0930 11:14:13.285575 2753662 cli_runner.go:164] Run: docker container inspect no-preload-935352 --format={{.State.Status}}
	I0930 11:14:13.285712 2753662 cli_runner.go:164] Run: docker container inspect no-preload-935352 --format={{.State.Status}}
	I0930 11:14:13.290338 2753662 addons.go:69] Setting metrics-server=true in profile "no-preload-935352"
	I0930 11:14:13.290364 2753662 addons.go:234] Setting addon metrics-server=true in "no-preload-935352"
	W0930 11:14:13.290373 2753662 addons.go:243] addon metrics-server should already be in state true
	I0930 11:14:13.290402 2753662 host.go:66] Checking if "no-preload-935352" exists ...
	I0930 11:14:13.290863 2753662 cli_runner.go:164] Run: docker container inspect no-preload-935352 --format={{.State.Status}}
	I0930 11:14:13.291667 2753662 out.go:177] * Verifying Kubernetes components...
	I0930 11:14:13.300575 2753662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:14:13.359093 2753662 addons.go:234] Setting addon default-storageclass=true in "no-preload-935352"
	W0930 11:14:13.359115 2753662 addons.go:243] addon default-storageclass should already be in state true
	I0930 11:14:13.359139 2753662 host.go:66] Checking if "no-preload-935352" exists ...
	I0930 11:14:13.359557 2753662 cli_runner.go:164] Run: docker container inspect no-preload-935352 --format={{.State.Status}}
	I0930 11:14:13.374676 2753662 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 11:14:13.374846 2753662 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0930 11:14:13.377085 2753662 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 11:14:13.377113 2753662 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 11:14:13.377180 2753662 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:14:13.377204 2753662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 11:14:13.377182 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:13.377250 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:13.390004 2753662 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0930 11:14:13.391706 2753662 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0930 11:14:10.905424 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:12.905460 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:13.393575 2753662 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0930 11:14:13.393598 2753662 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0930 11:14:13.393673 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:13.438988 2753662 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 11:14:13.439014 2753662 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 11:14:13.439077 2753662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-935352
	I0930 11:14:13.447010 2753662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41603 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/no-preload-935352/id_rsa Username:docker}
	I0930 11:14:13.454493 2753662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41603 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/no-preload-935352/id_rsa Username:docker}
	I0930 11:14:13.468501 2753662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41603 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/no-preload-935352/id_rsa Username:docker}
	I0930 11:14:13.481908 2753662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41603 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/no-preload-935352/id_rsa Username:docker}
	I0930 11:14:13.561467 2753662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:14:13.691452 2753662 node_ready.go:35] waiting up to 6m0s for node "no-preload-935352" to be "Ready" ...
	I0930 11:14:13.702126 2753662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 11:14:13.834059 2753662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:14:13.878480 2753662 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 11:14:13.878505 2753662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0930 11:14:13.888796 2753662 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0930 11:14:13.888821 2753662 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0930 11:14:14.006007 2753662 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 11:14:14.006041 2753662 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 11:14:14.067437 2753662 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0930 11:14:14.067464 2753662 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0930 11:14:14.101162 2753662 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0930 11:14:14.101203 2753662 retry.go:31] will retry after 313.466927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0930 11:14:14.174748 2753662 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0930 11:14:14.174783 2753662 retry.go:31] will retry after 234.9278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
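	Both failed applies are handed to minikube's retry helper, which logs "will retry after <duration>" and re-runs the command once the apiserver is reachable (the later "Completed: ... kubectl apply --force" lines show the retries eventually succeeding). A minimal Go sketch of that retry-with-delay pattern; the jitter bounds and attempt count here are assumptions, not minikube's exact values:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter runs fn, and on failure sleeps a short randomized interval before
// trying again, up to maxAttempts, echoing the "will retry after" log lines.
func retryAfter(maxAttempts int, fn func() error) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	attempts := 0
	err := retryAfter(5, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("connection refused") // stand-in for the apiserver not being up yet
		}
		return nil
	})
	fmt.Println("final result:", err)
}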
	I0930 11:14:14.182463 2753662 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 11:14:14.182491 2753662 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 11:14:14.186276 2753662 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0930 11:14:14.186300 2753662 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0930 11:14:14.240151 2753662 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0930 11:14:14.240176 2753662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0930 11:14:14.264121 2753662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 11:14:14.369780 2753662 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0930 11:14:14.369809 2753662 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0930 11:14:14.410115 2753662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:14:14.414823 2753662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0930 11:14:14.518327 2753662 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0930 11:14:14.518352 2753662 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0930 11:14:14.711719 2753662 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0930 11:14:14.711748 2753662 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0930 11:14:14.883978 2753662 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0930 11:14:14.884005 2753662 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0930 11:14:14.977152 2753662 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0930 11:14:14.977178 2753662 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0930 11:14:15.028188 2753662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0930 11:14:15.426479 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:17.904956 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:18.524928 2753662 node_ready.go:49] node "no-preload-935352" has status "Ready":"True"
	I0930 11:14:18.524962 2753662 node_ready.go:38] duration metric: took 4.833309925s for node "no-preload-935352" to be "Ready" ...
	I0930 11:14:18.524974 2753662 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:14:18.562543 2753662 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4bgp8" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:18.571769 2753662 pod_ready.go:93] pod "coredns-7c65d6cfc9-4bgp8" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:18.571794 2753662 pod_ready.go:82] duration metric: took 9.209449ms for pod "coredns-7c65d6cfc9-4bgp8" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:18.571805 2753662 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-935352" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:18.578625 2753662 pod_ready.go:93] pod "etcd-no-preload-935352" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:18.578654 2753662 pod_ready.go:82] duration metric: took 6.841253ms for pod "etcd-no-preload-935352" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:18.578668 2753662 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-935352" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:18.584839 2753662 pod_ready.go:93] pod "kube-apiserver-no-preload-935352" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:18.584862 2753662 pod_ready.go:82] duration metric: took 6.165949ms for pod "kube-apiserver-no-preload-935352" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:18.584873 2753662 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-935352" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:18.592058 2753662 pod_ready.go:93] pod "kube-controller-manager-no-preload-935352" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:18.592081 2753662 pod_ready.go:82] duration metric: took 7.198084ms for pod "kube-controller-manager-no-preload-935352" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:18.592095 2753662 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cjbdj" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:18.727901 2753662 pod_ready.go:93] pod "kube-proxy-cjbdj" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:18.727926 2753662 pod_ready.go:82] duration metric: took 135.823384ms for pod "kube-proxy-cjbdj" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:18.727938 2753662 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-935352" in "kube-system" namespace to be "Ready" ...
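	From here on the log alternates between the two clusters' readiness polls: pod_ready.go re-checks each pod's Ready condition on an interval until it flips to True or the per-pod timeout expires (which is how the old-k8s-version metrics-server wait ultimately fails). A minimal Go sketch of that poll loop, using a stand-in condition function rather than a real Kubernetes client:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitReady polls check on the given interval until it reports ready, returns
// an error, or the timeout elapses, mirroring the pod_ready.go wait pattern.
func waitReady(timeout, interval time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("context deadline exceeded")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitReady(2*time.Second, 200*time.Millisecond, func() (bool, error) {
		return time.Since(start) > time.Second, nil // becomes "Ready" after ~1s
	})
	fmt.Println("wait result:", err)
}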
	I0930 11:14:20.740431 2753662 pod_ready.go:103] pod "kube-scheduler-no-preload-935352" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:21.225363 2753662 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.815194424s)
	I0930 11:14:21.225416 2753662 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.961265111s)
	I0930 11:14:21.225424 2753662 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.810570081s)
	I0930 11:14:21.225435 2753662 addons.go:475] Verifying addon metrics-server=true in "no-preload-935352"
	I0930 11:14:21.293796 2753662 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.265552161s)
	I0930 11:14:21.295787 2753662 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-935352 addons enable metrics-server
	
	I0930 11:14:21.297718 2753662 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0930 11:14:20.406141 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:22.903534 2748394 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:23.405703 2748394 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:23.405737 2748394 pod_ready.go:82] duration metric: took 1m9.008109425s for pod "kube-controller-manager-old-k8s-version-852171" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:23.405750 2748394 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxvn5" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:23.411347 2748394 pod_ready.go:93] pod "kube-proxy-kxvn5" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:23.411374 2748394 pod_ready.go:82] duration metric: took 5.616076ms for pod "kube-proxy-kxvn5" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:23.411392 2748394 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:21.300908 2753662 addons.go:510] duration metric: took 8.016832117s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0930 11:14:23.233401 2753662 pod_ready.go:103] pod "kube-scheduler-no-preload-935352" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:25.233954 2753662 pod_ready.go:103] pod "kube-scheduler-no-preload-935352" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:25.420522 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:27.917173 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:29.918214 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:27.235238 2753662 pod_ready.go:103] pod "kube-scheduler-no-preload-935352" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:29.735153 2753662 pod_ready.go:103] pod "kube-scheduler-no-preload-935352" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:31.921230 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:34.417899 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:31.736361 2753662 pod_ready.go:103] pod "kube-scheduler-no-preload-935352" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:32.234567 2753662 pod_ready.go:93] pod "kube-scheduler-no-preload-935352" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:32.234595 2753662 pod_ready.go:82] duration metric: took 13.506648547s for pod "kube-scheduler-no-preload-935352" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.234609 2753662 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.241652 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:36.917779 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:38.918074 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:36.242140 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:38.740840 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:41.417776 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:43.917076 2748394 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:44.417984 2748394 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:44.418012 2748394 pod_ready.go:82] duration metric: took 21.006611311s for pod "kube-scheduler-old-k8s-version-852171" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:44.418024 2748394 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:40.740985 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:43.241005 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:45.242497 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:46.423144 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:48.424072 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:47.739999 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:49.741605 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:50.424451 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:52.424626 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:54.924126 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:52.241702 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:54.741160 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:56.924181 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:59.424088 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:56.741541 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:14:59.241301 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:01.425237 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:03.925218 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:01.242110 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:03.741523 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:06.424233 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:08.425749 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:06.240897 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:08.241592 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:10.923868 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:12.924116 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:10.741345 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:13.240203 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:15.425208 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:17.924777 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:15.740482 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:17.741193 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:20.240410 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:20.425696 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:22.923761 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:24.924125 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:22.741513 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:25.241488 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:27.425113 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:29.924057 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:27.741699 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:30.240853 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:31.924187 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:34.426183 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:32.241994 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:34.242082 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:36.924440 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:39.424402 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:36.743752 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:39.241089 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:41.924854 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:44.425247 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:41.241516 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:43.242214 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:45.262621 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:46.924157 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:48.924885 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:47.740514 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:49.741383 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:51.424900 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:53.926245 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:51.741874 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:54.242294 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:56.424530 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:58.924578 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:56.743054 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:15:59.240611 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:01.424551 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:03.424601 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:01.741527 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:03.741559 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:05.425106 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:07.425445 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:09.924554 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:06.239694 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:08.240789 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:10.240841 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:11.934902 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:14.424332 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:12.241830 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:14.741114 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:16.424431 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:18.424980 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:17.241349 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:19.241579 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:20.924837 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:23.424418 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:21.741339 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:24.240225 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:25.424858 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:27.923876 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:29.924751 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:26.241787 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:28.740690 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:32.423819 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:34.424200 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:30.740727 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:32.741483 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:34.743348 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:36.424886 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:38.425202 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:37.240668 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:39.242326 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:40.923754 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:43.423383 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:41.740795 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:43.741747 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:45.427788 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:47.923761 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:46.240857 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:48.743207 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:50.424019 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:52.424796 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:51.240713 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:53.241397 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:55.019233 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:57.424822 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:59.923898 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:55.741281 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:16:58.241399 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:00.255475 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:01.925374 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:03.926217 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:02.741767 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:04.743824 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:06.424675 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:08.425276 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:07.241164 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:09.741601 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:10.928976 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:13.424736 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:12.240773 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:14.240813 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:15.424947 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:17.924977 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:16.740882 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:19.240942 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:20.424782 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:22.932296 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:21.743096 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:24.240358 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:25.424174 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:27.424416 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:29.425018 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:26.241019 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:28.747546 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:31.924081 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:33.924510 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:31.240326 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:33.241308 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:35.927130 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:38.424967 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:35.740785 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:38.240762 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:40.425307 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:42.924148 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:40.742793 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:43.240413 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:45.244432 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:45.521204 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:47.923758 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:49.924104 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:47.741005 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:49.741819 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:51.925007 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:54.427003 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:52.240815 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:54.740884 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:56.923590 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:58.924034 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:56.741006 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:17:58.741343 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:01.424345 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:03.424683 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:01.240589 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:03.240876 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:05.424880 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:07.924618 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:05.740582 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:08.245007 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:10.423907 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:12.924252 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:14.924372 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:10.741323 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:13.240314 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:15.241296 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:17.425678 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:19.924542 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:17.242441 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:19.741094 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:21.924726 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:23.928413 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:21.741228 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:24.240991 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:26.423641 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:28.424571 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:26.740907 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:28.741483 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:30.425349 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:32.925641 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:30.741742 2753662 pod_ready.go:103] pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:32.241240 2753662 pod_ready.go:82] duration metric: took 4m0.006615293s for pod "metrics-server-6867b74b74-qw45l" in "kube-system" namespace to be "Ready" ...
	E0930 11:18:32.241269 2753662 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 11:18:32.241278 2753662 pod_ready.go:39] duration metric: took 4m13.716255259s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:18:32.241292 2753662 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:18:32.241322 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0930 11:18:32.241385 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 11:18:32.286452 2753662 cri.go:89] found id: "d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351"
	I0930 11:18:32.286530 2753662 cri.go:89] found id: "b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0"
	I0930 11:18:32.286549 2753662 cri.go:89] found id: ""
	I0930 11:18:32.286575 2753662 logs.go:276] 2 containers: [d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351 b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0]
	I0930 11:18:32.286681 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.290457 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.293989 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0930 11:18:32.294062 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 11:18:32.335926 2753662 cri.go:89] found id: "5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac"
	I0930 11:18:32.335953 2753662 cri.go:89] found id: "7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a"
	I0930 11:18:32.335959 2753662 cri.go:89] found id: ""
	I0930 11:18:32.335966 2753662 logs.go:276] 2 containers: [5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac 7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a]
	I0930 11:18:32.336022 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.339738 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.343427 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0930 11:18:32.343502 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 11:18:32.382045 2753662 cri.go:89] found id: "9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63"
	I0930 11:18:32.382126 2753662 cri.go:89] found id: "30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec"
	I0930 11:18:32.382145 2753662 cri.go:89] found id: ""
	I0930 11:18:32.382172 2753662 logs.go:276] 2 containers: [9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63 30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec]
	I0930 11:18:32.382258 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.386365 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.389797 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0930 11:18:32.389892 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 11:18:32.430062 2753662 cri.go:89] found id: "c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae"
	I0930 11:18:32.430094 2753662 cri.go:89] found id: "0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5"
	I0930 11:18:32.430099 2753662 cri.go:89] found id: ""
	I0930 11:18:32.430108 2753662 logs.go:276] 2 containers: [c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae 0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5]
	I0930 11:18:32.430163 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.433775 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.437307 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0930 11:18:32.437433 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 11:18:32.480375 2753662 cri.go:89] found id: "130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75"
	I0930 11:18:32.480394 2753662 cri.go:89] found id: "7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21"
	I0930 11:18:32.480400 2753662 cri.go:89] found id: ""
	I0930 11:18:32.480406 2753662 logs.go:276] 2 containers: [130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75 7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21]
	I0930 11:18:32.480468 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.484463 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.487939 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 11:18:32.488016 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 11:18:32.528383 2753662 cri.go:89] found id: "f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029"
	I0930 11:18:32.528421 2753662 cri.go:89] found id: "08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67"
	I0930 11:18:32.528426 2753662 cri.go:89] found id: ""
	I0930 11:18:32.528436 2753662 logs.go:276] 2 containers: [f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029 08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67]
	I0930 11:18:32.528514 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.532468 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.536479 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0930 11:18:32.536559 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 11:18:32.584137 2753662 cri.go:89] found id: "103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017"
	I0930 11:18:32.584203 2753662 cri.go:89] found id: "809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396"
	I0930 11:18:32.584223 2753662 cri.go:89] found id: ""
	I0930 11:18:32.584244 2753662 logs.go:276] 2 containers: [103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017 809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396]
	I0930 11:18:32.584318 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.588027 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.591451 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 11:18:32.591571 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 11:18:32.631754 2753662 cri.go:89] found id: "d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1"
	I0930 11:18:32.631780 2753662 cri.go:89] found id: ""
	I0930 11:18:32.631788 2753662 logs.go:276] 1 containers: [d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1]
	I0930 11:18:32.631855 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.636155 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0930 11:18:32.636275 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 11:18:32.675518 2753662 cri.go:89] found id: "6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52"
	I0930 11:18:32.675542 2753662 cri.go:89] found id: "61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8"
	I0930 11:18:32.675547 2753662 cri.go:89] found id: ""
	I0930 11:18:32.675555 2753662 logs.go:276] 2 containers: [6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52 61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8]
	I0930 11:18:32.675704 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.679397 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:32.682750 2753662 logs.go:123] Gathering logs for kube-controller-manager [f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029] ...
	I0930 11:18:32.682774 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029"
	I0930 11:18:32.751913 2753662 logs.go:123] Gathering logs for kubernetes-dashboard [d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1] ...
	I0930 11:18:32.751953 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1"
	I0930 11:18:32.793811 2753662 logs.go:123] Gathering logs for containerd ...
	I0930 11:18:32.793838 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0930 11:18:32.863582 2753662 logs.go:123] Gathering logs for dmesg ...
	I0930 11:18:32.863657 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 11:18:32.881323 2753662 logs.go:123] Gathering logs for kube-apiserver [b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0] ...
	I0930 11:18:32.881406 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0"
	I0930 11:18:32.944624 2753662 logs.go:123] Gathering logs for kube-proxy [7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21] ...
	I0930 11:18:32.944657 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21"
	I0930 11:18:32.996296 2753662 logs.go:123] Gathering logs for kube-controller-manager [08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67] ...
	I0930 11:18:32.996370 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67"
	I0930 11:18:33.064087 2753662 logs.go:123] Gathering logs for etcd [5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac] ...
	I0930 11:18:33.064142 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac"
	I0930 11:18:33.116456 2753662 logs.go:123] Gathering logs for coredns [9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63] ...
	I0930 11:18:33.116492 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63"
	I0930 11:18:33.162546 2753662 logs.go:123] Gathering logs for kube-scheduler [c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae] ...
	I0930 11:18:33.162576 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae"
	I0930 11:18:33.201761 2753662 logs.go:123] Gathering logs for container status ...
	I0930 11:18:33.201792 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 11:18:33.247261 2753662 logs.go:123] Gathering logs for kubelet ...
	I0930 11:18:33.247344 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 11:18:33.329306 2753662 logs.go:123] Gathering logs for kube-proxy [130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75] ...
	I0930 11:18:33.329347 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75"
	I0930 11:18:33.376537 2753662 logs.go:123] Gathering logs for storage-provisioner [6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52] ...
	I0930 11:18:33.376571 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52"
	I0930 11:18:33.419511 2753662 logs.go:123] Gathering logs for coredns [30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec] ...
	I0930 11:18:33.419538 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec"
	I0930 11:18:33.464101 2753662 logs.go:123] Gathering logs for kube-scheduler [0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5] ...
	I0930 11:18:33.464136 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5"
	I0930 11:18:33.513759 2753662 logs.go:123] Gathering logs for kindnet [103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017] ...
	I0930 11:18:33.513833 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017"
	I0930 11:18:33.583125 2753662 logs.go:123] Gathering logs for kindnet [809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396] ...
	I0930 11:18:33.583159 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396"
	I0930 11:18:33.623098 2753662 logs.go:123] Gathering logs for storage-provisioner [61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8] ...
	I0930 11:18:33.623128 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8"
	I0930 11:18:33.663370 2753662 logs.go:123] Gathering logs for describe nodes ...
	I0930 11:18:33.663400 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 11:18:33.840238 2753662 logs.go:123] Gathering logs for kube-apiserver [d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351] ...
	I0930 11:18:33.840271 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351"
	I0930 11:18:33.918833 2753662 logs.go:123] Gathering logs for etcd [7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a] ...
	I0930 11:18:33.918883 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a"
	I0930 11:18:35.425447 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:37.426126 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:39.924012 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:36.493331 2753662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:18:36.506425 2753662 api_server.go:72] duration metric: took 4m23.222951374s to wait for apiserver process to appear ...
	I0930 11:18:36.506453 2753662 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:18:36.506491 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0930 11:18:36.506553 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 11:18:36.545397 2753662 cri.go:89] found id: "d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351"
	I0930 11:18:36.545427 2753662 cri.go:89] found id: "b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0"
	I0930 11:18:36.545433 2753662 cri.go:89] found id: ""
	I0930 11:18:36.545441 2753662 logs.go:276] 2 containers: [d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351 b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0]
	I0930 11:18:36.545511 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.549844 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.553936 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0930 11:18:36.554025 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 11:18:36.603476 2753662 cri.go:89] found id: "5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac"
	I0930 11:18:36.603513 2753662 cri.go:89] found id: "7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a"
	I0930 11:18:36.603520 2753662 cri.go:89] found id: ""
	I0930 11:18:36.603528 2753662 logs.go:276] 2 containers: [5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac 7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a]
	I0930 11:18:36.603583 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.607821 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.611721 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0930 11:18:36.611798 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 11:18:36.666460 2753662 cri.go:89] found id: "9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63"
	I0930 11:18:36.666538 2753662 cri.go:89] found id: "30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec"
	I0930 11:18:36.666558 2753662 cri.go:89] found id: ""
	I0930 11:18:36.666573 2753662 logs.go:276] 2 containers: [9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63 30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec]
	I0930 11:18:36.666644 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.679052 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.683518 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0930 11:18:36.683682 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 11:18:36.722450 2753662 cri.go:89] found id: "c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae"
	I0930 11:18:36.722473 2753662 cri.go:89] found id: "0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5"
	I0930 11:18:36.722481 2753662 cri.go:89] found id: ""
	I0930 11:18:36.722489 2753662 logs.go:276] 2 containers: [c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae 0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5]
	I0930 11:18:36.722573 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.726390 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.730015 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0930 11:18:36.730093 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 11:18:36.771668 2753662 cri.go:89] found id: "130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75"
	I0930 11:18:36.771692 2753662 cri.go:89] found id: "7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21"
	I0930 11:18:36.771697 2753662 cri.go:89] found id: ""
	I0930 11:18:36.771705 2753662 logs.go:276] 2 containers: [130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75 7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21]
	I0930 11:18:36.771758 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.775361 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.778662 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 11:18:36.778734 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 11:18:36.816608 2753662 cri.go:89] found id: "f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029"
	I0930 11:18:36.816634 2753662 cri.go:89] found id: "08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67"
	I0930 11:18:36.816639 2753662 cri.go:89] found id: ""
	I0930 11:18:36.816647 2753662 logs.go:276] 2 containers: [f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029 08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67]
	I0930 11:18:36.816702 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.821460 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.825451 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0930 11:18:36.825534 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 11:18:36.881019 2753662 cri.go:89] found id: "103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017"
	I0930 11:18:36.881047 2753662 cri.go:89] found id: "809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396"
	I0930 11:18:36.881053 2753662 cri.go:89] found id: ""
	I0930 11:18:36.881070 2753662 logs.go:276] 2 containers: [103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017 809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396]
	I0930 11:18:36.881131 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.886067 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.889807 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 11:18:36.889886 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 11:18:36.946390 2753662 cri.go:89] found id: "d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1"
	I0930 11:18:36.946419 2753662 cri.go:89] found id: ""
	I0930 11:18:36.946427 2753662 logs.go:276] 1 containers: [d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1]
	I0930 11:18:36.946488 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.950410 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0930 11:18:36.950488 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 11:18:36.993607 2753662 cri.go:89] found id: "6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52"
	I0930 11:18:36.993645 2753662 cri.go:89] found id: "61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8"
	I0930 11:18:36.993651 2753662 cri.go:89] found id: ""
	I0930 11:18:36.993658 2753662 logs.go:276] 2 containers: [6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52 61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8]
	I0930 11:18:36.993715 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:36.997723 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:37.003856 2753662 logs.go:123] Gathering logs for kube-controller-manager [f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029] ...
	I0930 11:18:37.003886 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029"
	I0930 11:18:37.075219 2753662 logs.go:123] Gathering logs for kubelet ...
	I0930 11:18:37.075273 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 11:18:37.153524 2753662 logs.go:123] Gathering logs for kube-apiserver [d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351] ...
	I0930 11:18:37.153563 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351"
	I0930 11:18:37.210924 2753662 logs.go:123] Gathering logs for etcd [7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a] ...
	I0930 11:18:37.210961 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a"
	I0930 11:18:37.258361 2753662 logs.go:123] Gathering logs for coredns [9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63] ...
	I0930 11:18:37.258449 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63"
	I0930 11:18:37.310696 2753662 logs.go:123] Gathering logs for kube-scheduler [c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae] ...
	I0930 11:18:37.310794 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae"
	I0930 11:18:37.352514 2753662 logs.go:123] Gathering logs for kube-scheduler [0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5] ...
	I0930 11:18:37.352544 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5"
	I0930 11:18:37.400627 2753662 logs.go:123] Gathering logs for kube-proxy [130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75] ...
	I0930 11:18:37.400668 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75"
	I0930 11:18:37.455553 2753662 logs.go:123] Gathering logs for containerd ...
	I0930 11:18:37.455647 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0930 11:18:37.522678 2753662 logs.go:123] Gathering logs for container status ...
	I0930 11:18:37.522715 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 11:18:37.576590 2753662 logs.go:123] Gathering logs for dmesg ...
	I0930 11:18:37.576619 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 11:18:37.593392 2753662 logs.go:123] Gathering logs for kube-apiserver [b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0] ...
	I0930 11:18:37.593421 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0"
	I0930 11:18:37.644059 2753662 logs.go:123] Gathering logs for kube-controller-manager [08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67] ...
	I0930 11:18:37.644092 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67"
	I0930 11:18:37.718788 2753662 logs.go:123] Gathering logs for kindnet [103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017] ...
	I0930 11:18:37.718864 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017"
	I0930 11:18:37.763363 2753662 logs.go:123] Gathering logs for storage-provisioner [6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52] ...
	I0930 11:18:37.763455 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52"
	I0930 11:18:37.802992 2753662 logs.go:123] Gathering logs for storage-provisioner [61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8] ...
	I0930 11:18:37.803020 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8"
	I0930 11:18:37.848221 2753662 logs.go:123] Gathering logs for describe nodes ...
	I0930 11:18:37.848250 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 11:18:37.985596 2753662 logs.go:123] Gathering logs for coredns [30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec] ...
	I0930 11:18:37.985628 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec"
	I0930 11:18:38.034313 2753662 logs.go:123] Gathering logs for kube-proxy [7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21] ...
	I0930 11:18:38.034348 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21"
	I0930 11:18:38.086124 2753662 logs.go:123] Gathering logs for kindnet [809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396] ...
	I0930 11:18:38.086155 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396"
	I0930 11:18:38.126257 2753662 logs.go:123] Gathering logs for etcd [5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac] ...
	I0930 11:18:38.126295 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac"
	I0930 11:18:38.174515 2753662 logs.go:123] Gathering logs for kubernetes-dashboard [d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1] ...
	I0930 11:18:38.174547 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1"
	I0930 11:18:42.425404 2748394 pod_ready.go:103] pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace has status "Ready":"False"
	I0930 11:18:44.424651 2748394 pod_ready.go:82] duration metric: took 4m0.006611245s for pod "metrics-server-9975d5f86-m88nk" in "kube-system" namespace to be "Ready" ...
	E0930 11:18:44.424680 2748394 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0930 11:18:44.424690 2748394 pod_ready.go:39] duration metric: took 5m30.831059534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:18:44.424706 2748394 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:18:44.424735 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0930 11:18:44.424798 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 11:18:44.462050 2748394 cri.go:89] found id: "27928206b912b5caa53bfc5467de1638284a81a43e7262d3661bd5d8430a9d7f"
	I0930 11:18:44.462072 2748394 cri.go:89] found id: "2e0c3eafc3ba0696284889beac444aad70eb46423288ac9bd41aa4dd0ed4a245"
	I0930 11:18:44.462077 2748394 cri.go:89] found id: ""
	I0930 11:18:44.462085 2748394 logs.go:276] 2 containers: [27928206b912b5caa53bfc5467de1638284a81a43e7262d3661bd5d8430a9d7f 2e0c3eafc3ba0696284889beac444aad70eb46423288ac9bd41aa4dd0ed4a245]
	I0930 11:18:44.462139 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.465719 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.469526 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0930 11:18:44.469597 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 11:18:44.509789 2748394 cri.go:89] found id: "bf09b410e75f8b5b7a1af956c961ed5d411b85e54a06fdec3f471c80d8088e5b"
	I0930 11:18:44.509814 2748394 cri.go:89] found id: "ac879314a70238e9a3d188a20b16c633d0913c497744edf9f8fb4e81d4d8cffc"
	I0930 11:18:44.509820 2748394 cri.go:89] found id: ""
	I0930 11:18:44.509827 2748394 logs.go:276] 2 containers: [bf09b410e75f8b5b7a1af956c961ed5d411b85e54a06fdec3f471c80d8088e5b ac879314a70238e9a3d188a20b16c633d0913c497744edf9f8fb4e81d4d8cffc]
	I0930 11:18:44.509885 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.513303 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.516645 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0930 11:18:44.516757 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 11:18:44.555725 2748394 cri.go:89] found id: "71d296562fe24f66e1a29229021fba4fcf7228bfccea2b74202e1f3b5a5c5061"
	I0930 11:18:44.555749 2748394 cri.go:89] found id: "cd839508497e8be03f9a8159be49515fa9676c830305d9598c401cc752b5586e"
	I0930 11:18:44.555755 2748394 cri.go:89] found id: ""
	I0930 11:18:44.555765 2748394 logs.go:276] 2 containers: [71d296562fe24f66e1a29229021fba4fcf7228bfccea2b74202e1f3b5a5c5061 cd839508497e8be03f9a8159be49515fa9676c830305d9598c401cc752b5586e]
	I0930 11:18:44.555823 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.559329 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.562687 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0930 11:18:44.562761 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 11:18:44.602629 2748394 cri.go:89] found id: "5dad8b0ba377364fd2242c1fd0cce8f56d5513c2b51a7703f0468c415d9b95d9"
	I0930 11:18:44.602702 2748394 cri.go:89] found id: "4c39e9082952590d679ee58214b6a4fa416b2a839c581ae827b77ed10269e492"
	I0930 11:18:44.602723 2748394 cri.go:89] found id: ""
	I0930 11:18:44.602744 2748394 logs.go:276] 2 containers: [5dad8b0ba377364fd2242c1fd0cce8f56d5513c2b51a7703f0468c415d9b95d9 4c39e9082952590d679ee58214b6a4fa416b2a839c581ae827b77ed10269e492]
	I0930 11:18:44.602830 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.606549 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.609624 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0930 11:18:44.609703 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 11:18:44.646286 2748394 cri.go:89] found id: "c51e8741669e52df8a6fe07888e8c0e98e5233e0d659eef6e07e454291c68107"
	I0930 11:18:44.646318 2748394 cri.go:89] found id: "aac2d21475c261a888c6689fc91be4ffc292d1e0eab040ad68df7c73ae710f6f"
	I0930 11:18:44.646324 2748394 cri.go:89] found id: ""
	I0930 11:18:44.646331 2748394 logs.go:276] 2 containers: [c51e8741669e52df8a6fe07888e8c0e98e5233e0d659eef6e07e454291c68107 aac2d21475c261a888c6689fc91be4ffc292d1e0eab040ad68df7c73ae710f6f]
	I0930 11:18:44.646386 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.649926 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.659151 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 11:18:44.659225 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 11:18:44.707019 2748394 cri.go:89] found id: "7304773a3405a7475e221c939f968656688e4c992063f83831ace3ceda803c7c"
	I0930 11:18:44.707045 2748394 cri.go:89] found id: "db2335093572eb72f5de83a00411c45652f8ec375bb1ebdcfa6fae0d706b1e2a"
	I0930 11:18:44.707051 2748394 cri.go:89] found id: ""
	I0930 11:18:44.707058 2748394 logs.go:276] 2 containers: [7304773a3405a7475e221c939f968656688e4c992063f83831ace3ceda803c7c db2335093572eb72f5de83a00411c45652f8ec375bb1ebdcfa6fae0d706b1e2a]
	I0930 11:18:44.707118 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.710705 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.714275 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0930 11:18:44.714374 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 11:18:44.753025 2748394 cri.go:89] found id: "f5215cf9da519d5ed406523e41bc68a570586977204e40f983b753e6b12f62f1"
	I0930 11:18:44.753095 2748394 cri.go:89] found id: "37144e1d82fd46928b194266869ab94822cb7c307075051434a8baf0910be3a8"
	I0930 11:18:44.753115 2748394 cri.go:89] found id: ""
	I0930 11:18:44.753141 2748394 logs.go:276] 2 containers: [f5215cf9da519d5ed406523e41bc68a570586977204e40f983b753e6b12f62f1 37144e1d82fd46928b194266869ab94822cb7c307075051434a8baf0910be3a8]
	I0930 11:18:44.753219 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.756937 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.760578 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 11:18:44.760705 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 11:18:44.800958 2748394 cri.go:89] found id: "348449ef03663da76e752c4aa688bc8b80580838b44c841f72709b8cae477153"
	I0930 11:18:44.800997 2748394 cri.go:89] found id: ""
	I0930 11:18:44.801005 2748394 logs.go:276] 1 containers: [348449ef03663da76e752c4aa688bc8b80580838b44c841f72709b8cae477153]
	I0930 11:18:44.801064 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.804510 2748394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0930 11:18:44.804585 2748394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 11:18:44.850128 2748394 cri.go:89] found id: "8d88cb7d95363af60796acda1cf39e0daf68f34c71e9906563eed8aa171bda75"
	I0930 11:18:44.850153 2748394 cri.go:89] found id: "bd1018a19355d11d0c01dc9dde1d023a8af02b1ad94db1fb3d13c565b433d42e"
	I0930 11:18:44.850158 2748394 cri.go:89] found id: ""
	I0930 11:18:44.850165 2748394 logs.go:276] 2 containers: [8d88cb7d95363af60796acda1cf39e0daf68f34c71e9906563eed8aa171bda75 bd1018a19355d11d0c01dc9dde1d023a8af02b1ad94db1fb3d13c565b433d42e]
	I0930 11:18:44.850235 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.853883 2748394 ssh_runner.go:195] Run: which crictl
	I0930 11:18:44.857404 2748394 logs.go:123] Gathering logs for kube-controller-manager [7304773a3405a7475e221c939f968656688e4c992063f83831ace3ceda803c7c] ...
	I0930 11:18:44.857429 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7304773a3405a7475e221c939f968656688e4c992063f83831ace3ceda803c7c"
	I0930 11:18:44.915683 2748394 logs.go:123] Gathering logs for kube-controller-manager [db2335093572eb72f5de83a00411c45652f8ec375bb1ebdcfa6fae0d706b1e2a] ...
	I0930 11:18:44.915718 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db2335093572eb72f5de83a00411c45652f8ec375bb1ebdcfa6fae0d706b1e2a"
	I0930 11:18:40.714336 2753662 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0930 11:18:40.722066 2753662 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0930 11:18:40.722970 2753662 api_server.go:141] control plane version: v1.31.1
	I0930 11:18:40.722997 2753662 api_server.go:131] duration metric: took 4.216535727s to wait for apiserver health ...
	I0930 11:18:40.723013 2753662 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:18:40.723038 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0930 11:18:40.723102 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 11:18:40.762804 2753662 cri.go:89] found id: "d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351"
	I0930 11:18:40.762836 2753662 cri.go:89] found id: "b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0"
	I0930 11:18:40.762841 2753662 cri.go:89] found id: ""
	I0930 11:18:40.762849 2753662 logs.go:276] 2 containers: [d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351 b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0]
	I0930 11:18:40.762910 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:40.766597 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:40.770406 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0930 11:18:40.770493 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 11:18:40.811842 2753662 cri.go:89] found id: "5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac"
	I0930 11:18:40.811861 2753662 cri.go:89] found id: "7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a"
	I0930 11:18:40.811866 2753662 cri.go:89] found id: ""
	I0930 11:18:40.811874 2753662 logs.go:276] 2 containers: [5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac 7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a]
	I0930 11:18:40.811928 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:40.815993 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:40.825670 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0930 11:18:40.825741 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 11:18:40.866133 2753662 cri.go:89] found id: "9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63"
	I0930 11:18:40.866209 2753662 cri.go:89] found id: "30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec"
	I0930 11:18:40.866222 2753662 cri.go:89] found id: ""
	I0930 11:18:40.866232 2753662 logs.go:276] 2 containers: [9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63 30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec]
	I0930 11:18:40.866295 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:40.870441 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:40.873981 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0930 11:18:40.874093 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 11:18:40.914081 2753662 cri.go:89] found id: "c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae"
	I0930 11:18:40.914106 2753662 cri.go:89] found id: "0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5"
	I0930 11:18:40.914112 2753662 cri.go:89] found id: ""
	I0930 11:18:40.914130 2753662 logs.go:276] 2 containers: [c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae 0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5]
	I0930 11:18:40.914186 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:40.918541 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:40.923330 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0930 11:18:40.923406 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 11:18:40.969472 2753662 cri.go:89] found id: "130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75"
	I0930 11:18:40.969498 2753662 cri.go:89] found id: "7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21"
	I0930 11:18:40.969504 2753662 cri.go:89] found id: ""
	I0930 11:18:40.969511 2753662 logs.go:276] 2 containers: [130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75 7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21]
	I0930 11:18:40.969565 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:40.973477 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:40.976986 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 11:18:40.977082 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 11:18:41.028295 2753662 cri.go:89] found id: "f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029"
	I0930 11:18:41.028361 2753662 cri.go:89] found id: "08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67"
	I0930 11:18:41.028383 2753662 cri.go:89] found id: ""
	I0930 11:18:41.028410 2753662 logs.go:276] 2 containers: [f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029 08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67]
	I0930 11:18:41.028486 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:41.032329 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:41.035897 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0930 11:18:41.035969 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 11:18:41.076262 2753662 cri.go:89] found id: "103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017"
	I0930 11:18:41.076283 2753662 cri.go:89] found id: "809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396"
	I0930 11:18:41.076288 2753662 cri.go:89] found id: ""
	I0930 11:18:41.076296 2753662 logs.go:276] 2 containers: [103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017 809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396]
	I0930 11:18:41.076352 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:41.080147 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:41.083700 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0930 11:18:41.083805 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0930 11:18:41.121536 2753662 cri.go:89] found id: "6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52"
	I0930 11:18:41.121601 2753662 cri.go:89] found id: "61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8"
	I0930 11:18:41.121622 2753662 cri.go:89] found id: ""
	I0930 11:18:41.121637 2753662 logs.go:276] 2 containers: [6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52 61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8]
	I0930 11:18:41.121693 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:41.125506 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:41.128838 2753662 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0930 11:18:41.128948 2753662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0930 11:18:41.167765 2753662 cri.go:89] found id: "d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1"
	I0930 11:18:41.167791 2753662 cri.go:89] found id: ""
	I0930 11:18:41.167800 2753662 logs.go:276] 1 containers: [d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1]
	I0930 11:18:41.167860 2753662 ssh_runner.go:195] Run: which crictl
	I0930 11:18:41.171576 2753662 logs.go:123] Gathering logs for storage-provisioner [6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52] ...
	I0930 11:18:41.171644 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6586079973b0ff956c5c31fde1956ef873b38edaff7a45e386e8d13869f20e52"
	I0930 11:18:41.210927 2753662 logs.go:123] Gathering logs for storage-provisioner [61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8] ...
	I0930 11:18:41.210955 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61349e44b646746923bf1b49740bc02a2de30dd9968197550c17d32e17661fb8"
	I0930 11:18:41.253467 2753662 logs.go:123] Gathering logs for etcd [5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac] ...
	I0930 11:18:41.253543 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dfe7f955c23240a97b087dac74bf509c3b4789b8079a779915c9c6ca5a1caac"
	I0930 11:18:41.304498 2753662 logs.go:123] Gathering logs for coredns [30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec] ...
	I0930 11:18:41.304535 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30dbe4b2dcfcb1e137b8e1c3c664bb60b7e6f9c85800ca1b3b4bb8c350eb70ec"
	I0930 11:18:41.342223 2753662 logs.go:123] Gathering logs for kube-controller-manager [f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029] ...
	I0930 11:18:41.342253 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5f02f8d05041b43052391b2e9bb1cab096ae6e6e76755325edfb6be41a5f029"
	I0930 11:18:41.414862 2753662 logs.go:123] Gathering logs for kindnet [809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396] ...
	I0930 11:18:41.414896 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 809cbbc9ec2eff8d69b777b30464e86e414f8c250b3fa353ab814d0fea3c8396"
	I0930 11:18:41.453991 2753662 logs.go:123] Gathering logs for containerd ...
	I0930 11:18:41.454021 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0930 11:18:41.520542 2753662 logs.go:123] Gathering logs for kubelet ...
	I0930 11:18:41.520580 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 11:18:41.603427 2753662 logs.go:123] Gathering logs for dmesg ...
	I0930 11:18:41.603465 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 11:18:41.621278 2753662 logs.go:123] Gathering logs for kube-apiserver [b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0] ...
	I0930 11:18:41.621356 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2a9063c1e18ad04e532aae88ee7f5e6d787995c5d022190c6b4ff6a6e796ab0"
	I0930 11:18:41.706113 2753662 logs.go:123] Gathering logs for etcd [7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a] ...
	I0930 11:18:41.706151 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aac3bb7b731385eb96443519e5276c26d96946953e4a3292e749fa35c1f966a"
	I0930 11:18:41.753030 2753662 logs.go:123] Gathering logs for kube-controller-manager [08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67] ...
	I0930 11:18:41.753101 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08ba3800217c4c68cc697c6e6a55de2ba1e4733e403eff2f59fa49c6bb238d67"
	I0930 11:18:41.839993 2753662 logs.go:123] Gathering logs for kindnet [103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017] ...
	I0930 11:18:41.840030 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103111dbd7db04e6c080ffe84a9afd15fce2ef6197e88a89a830095459b6b017"
	I0930 11:18:41.879903 2753662 logs.go:123] Gathering logs for kubernetes-dashboard [d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1] ...
	I0930 11:18:41.879934 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e2cf8c841e60f2c8d22e016ce7aafc02c684d4ea6624ce04c3f5cdbb4fb1b1"
	I0930 11:18:41.917676 2753662 logs.go:123] Gathering logs for container status ...
	I0930 11:18:41.917704 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 11:18:41.967495 2753662 logs.go:123] Gathering logs for kube-scheduler [0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5] ...
	I0930 11:18:41.967526 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9a47767d90a60eee42ef36ff4c7e1ce5e18e316d8e780142dcc5e0ba4458f5"
	I0930 11:18:42.025431 2753662 logs.go:123] Gathering logs for kube-proxy [130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75] ...
	I0930 11:18:42.025522 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 130c7998eb5d902ec078acd5a5d4d31c919c6a1e071dee7e01ef5dc348e35b75"
	I0930 11:18:42.068533 2753662 logs.go:123] Gathering logs for kube-proxy [7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21] ...
	I0930 11:18:42.068567 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ef7c52097288b999b6b18c64524f8b5bd4bc0ffed0814bbb6e8f81301ea3e21"
	I0930 11:18:42.132581 2753662 logs.go:123] Gathering logs for describe nodes ...
	I0930 11:18:42.132676 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 11:18:42.322686 2753662 logs.go:123] Gathering logs for kube-apiserver [d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351] ...
	I0930 11:18:42.322725 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9b3984c369fb7f774fd8d43fdb28ebfde0258ab8745ffc5df429ec1ac22f351"
	I0930 11:18:42.397217 2753662 logs.go:123] Gathering logs for coredns [9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63] ...
	I0930 11:18:42.397254 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9931c26676a08232c887b09519f543f160427b5217bfbcf9523d01e1df625f63"
	I0930 11:18:42.450707 2753662 logs.go:123] Gathering logs for kube-scheduler [c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae] ...
	I0930 11:18:42.450735 2753662 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7ec78aa94f476394ccab62ac182f656d48aca1bc6bdb410ae2600fd27e25fae"
	I0930 11:18:45.006342 2753662 system_pods.go:59] 9 kube-system pods found
	I0930 11:18:45.006451 2753662 system_pods.go:61] "coredns-7c65d6cfc9-4bgp8" [c1eda3f0-982a-4d1f-9568-a0e5f47bdafe] Running
	I0930 11:18:45.006475 2753662 system_pods.go:61] "etcd-no-preload-935352" [860c8de9-232d-42dc-921d-a09355d5714c] Running
	I0930 11:18:45.006512 2753662 system_pods.go:61] "kindnet-djtlp" [fe6dd358-f6e8-4c8e-b685-9f0b30470643] Running
	I0930 11:18:45.006539 2753662 system_pods.go:61] "kube-apiserver-no-preload-935352" [f537e9ab-128b-4e40-86c2-22fbd76781a8] Running
	I0930 11:18:45.006562 2753662 system_pods.go:61] "kube-controller-manager-no-preload-935352" [90d4b89e-e60d-4db4-9e30-34828f2da8de] Running
	I0930 11:18:45.006600 2753662 system_pods.go:61] "kube-proxy-cjbdj" [8744e211-0d49-4f16-a390-650074abc461] Running
	I0930 11:18:45.006625 2753662 system_pods.go:61] "kube-scheduler-no-preload-935352" [ca9b6d3e-fe87-42a5-af4e-ddd2047ebcb5] Running
	I0930 11:18:45.006650 2753662 system_pods.go:61] "metrics-server-6867b74b74-qw45l" [827c89fe-012e-459e-a56d-0df205ac2b16] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 11:18:45.006688 2753662 system_pods.go:61] "storage-provisioner" [eea6c532-e4ef-427e-9586-b0a8ec82615a] Running
	I0930 11:18:45.006718 2753662 system_pods.go:74] duration metric: took 4.283695965s to wait for pod list to return data ...
	I0930 11:18:45.006741 2753662 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:18:45.014128 2753662 default_sa.go:45] found service account: "default"
	I0930 11:18:45.014166 2753662 default_sa.go:55] duration metric: took 7.153807ms for default service account to be created ...
	I0930 11:18:45.014180 2753662 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:18:45.024108 2753662 system_pods.go:86] 9 kube-system pods found
	I0930 11:18:45.024163 2753662 system_pods.go:89] "coredns-7c65d6cfc9-4bgp8" [c1eda3f0-982a-4d1f-9568-a0e5f47bdafe] Running
	I0930 11:18:45.024173 2753662 system_pods.go:89] "etcd-no-preload-935352" [860c8de9-232d-42dc-921d-a09355d5714c] Running
	I0930 11:18:45.024181 2753662 system_pods.go:89] "kindnet-djtlp" [fe6dd358-f6e8-4c8e-b685-9f0b30470643] Running
	I0930 11:18:45.024186 2753662 system_pods.go:89] "kube-apiserver-no-preload-935352" [f537e9ab-128b-4e40-86c2-22fbd76781a8] Running
	I0930 11:18:45.024194 2753662 system_pods.go:89] "kube-controller-manager-no-preload-935352" [90d4b89e-e60d-4db4-9e30-34828f2da8de] Running
	I0930 11:18:45.024199 2753662 system_pods.go:89] "kube-proxy-cjbdj" [8744e211-0d49-4f16-a390-650074abc461] Running
	I0930 11:18:45.024205 2753662 system_pods.go:89] "kube-scheduler-no-preload-935352" [ca9b6d3e-fe87-42a5-af4e-ddd2047ebcb5] Running
	I0930 11:18:45.024213 2753662 system_pods.go:89] "metrics-server-6867b74b74-qw45l" [827c89fe-012e-459e-a56d-0df205ac2b16] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 11:18:45.024218 2753662 system_pods.go:89] "storage-provisioner" [eea6c532-e4ef-427e-9586-b0a8ec82615a] Running
	I0930 11:18:45.024227 2753662 system_pods.go:126] duration metric: took 10.040876ms to wait for k8s-apps to be running ...
	I0930 11:18:45.024234 2753662 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:18:45.024303 2753662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:18:45.046301 2753662 system_svc.go:56] duration metric: took 22.055037ms WaitForService to wait for kubelet
	I0930 11:18:45.046340 2753662 kubeadm.go:582] duration metric: took 4m31.762863448s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:18:45.046363 2753662 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:18:45.051259 2753662 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0930 11:18:45.051302 2753662 node_conditions.go:123] node cpu capacity is 2
	I0930 11:18:45.051319 2753662 node_conditions.go:105] duration metric: took 4.947721ms to run NodePressure ...
	I0930 11:18:45.051333 2753662 start.go:241] waiting for startup goroutines ...
	I0930 11:18:45.051341 2753662 start.go:246] waiting for cluster config update ...
	I0930 11:18:45.051354 2753662 start.go:255] writing updated cluster config ...
	I0930 11:18:45.051828 2753662 ssh_runner.go:195] Run: rm -f paused
	I0930 11:18:45.181312 2753662 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 11:18:45.183726 2753662 out.go:177] * Done! kubectl is now configured to use "no-preload-935352" cluster and "default" namespace by default
	I0930 11:18:44.970537 2748394 logs.go:123] Gathering logs for kubernetes-dashboard [348449ef03663da76e752c4aa688bc8b80580838b44c841f72709b8cae477153] ...
	I0930 11:18:44.970574 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 348449ef03663da76e752c4aa688bc8b80580838b44c841f72709b8cae477153"
	I0930 11:18:45.075362 2748394 logs.go:123] Gathering logs for coredns [cd839508497e8be03f9a8159be49515fa9676c830305d9598c401cc752b5586e] ...
	I0930 11:18:45.075401 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd839508497e8be03f9a8159be49515fa9676c830305d9598c401cc752b5586e"
	I0930 11:18:45.174486 2748394 logs.go:123] Gathering logs for kube-scheduler [5dad8b0ba377364fd2242c1fd0cce8f56d5513c2b51a7703f0468c415d9b95d9] ...
	I0930 11:18:45.174524 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dad8b0ba377364fd2242c1fd0cce8f56d5513c2b51a7703f0468c415d9b95d9"
	I0930 11:18:45.317114 2748394 logs.go:123] Gathering logs for kube-proxy [c51e8741669e52df8a6fe07888e8c0e98e5233e0d659eef6e07e454291c68107] ...
	I0930 11:18:45.317143 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c51e8741669e52df8a6fe07888e8c0e98e5233e0d659eef6e07e454291c68107"
	I0930 11:18:45.409941 2748394 logs.go:123] Gathering logs for kube-proxy [aac2d21475c261a888c6689fc91be4ffc292d1e0eab040ad68df7c73ae710f6f] ...
	I0930 11:18:45.409967 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aac2d21475c261a888c6689fc91be4ffc292d1e0eab040ad68df7c73ae710f6f"
	I0930 11:18:45.482169 2748394 logs.go:123] Gathering logs for kindnet [f5215cf9da519d5ed406523e41bc68a570586977204e40f983b753e6b12f62f1] ...
	I0930 11:18:45.482209 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5215cf9da519d5ed406523e41bc68a570586977204e40f983b753e6b12f62f1"
	I0930 11:18:45.530754 2748394 logs.go:123] Gathering logs for kindnet [37144e1d82fd46928b194266869ab94822cb7c307075051434a8baf0910be3a8] ...
	I0930 11:18:45.530786 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37144e1d82fd46928b194266869ab94822cb7c307075051434a8baf0910be3a8"
	I0930 11:18:45.597379 2748394 logs.go:123] Gathering logs for containerd ...
	I0930 11:18:45.597405 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0930 11:18:45.680691 2748394 logs.go:123] Gathering logs for coredns [71d296562fe24f66e1a29229021fba4fcf7228bfccea2b74202e1f3b5a5c5061] ...
	I0930 11:18:45.680728 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71d296562fe24f66e1a29229021fba4fcf7228bfccea2b74202e1f3b5a5c5061"
	I0930 11:18:45.733912 2748394 logs.go:123] Gathering logs for kube-apiserver [27928206b912b5caa53bfc5467de1638284a81a43e7262d3661bd5d8430a9d7f] ...
	I0930 11:18:45.733945 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27928206b912b5caa53bfc5467de1638284a81a43e7262d3661bd5d8430a9d7f"
	I0930 11:18:45.799948 2748394 logs.go:123] Gathering logs for kube-apiserver [2e0c3eafc3ba0696284889beac444aad70eb46423288ac9bd41aa4dd0ed4a245] ...
	I0930 11:18:45.799983 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e0c3eafc3ba0696284889beac444aad70eb46423288ac9bd41aa4dd0ed4a245"
	I0930 11:18:45.879945 2748394 logs.go:123] Gathering logs for kube-scheduler [4c39e9082952590d679ee58214b6a4fa416b2a839c581ae827b77ed10269e492] ...
	I0930 11:18:45.879980 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c39e9082952590d679ee58214b6a4fa416b2a839c581ae827b77ed10269e492"
	I0930 11:18:45.927890 2748394 logs.go:123] Gathering logs for storage-provisioner [8d88cb7d95363af60796acda1cf39e0daf68f34c71e9906563eed8aa171bda75] ...
	I0930 11:18:45.927923 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d88cb7d95363af60796acda1cf39e0daf68f34c71e9906563eed8aa171bda75"
	I0930 11:18:45.966720 2748394 logs.go:123] Gathering logs for storage-provisioner [bd1018a19355d11d0c01dc9dde1d023a8af02b1ad94db1fb3d13c565b433d42e] ...
	I0930 11:18:45.966747 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1018a19355d11d0c01dc9dde1d023a8af02b1ad94db1fb3d13c565b433d42e"
	I0930 11:18:46.022201 2748394 logs.go:123] Gathering logs for container status ...
	I0930 11:18:46.022232 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 11:18:46.079558 2748394 logs.go:123] Gathering logs for kubelet ...
	I0930 11:18:46.079588 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 11:18:46.139311 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.773445     659 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.139554 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.694924     659 reflector.go:138] object-"kube-system"/"coredns-token-qm98j": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-qm98j" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.139796 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.695012     659 reflector.go:138] object-"kube-system"/"metrics-server-token-rn6jg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-rn6jg" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.140021 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.773604     659 reflector.go:138] object-"kube-system"/"kindnet-token-zrm6d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-zrm6d" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.140246 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.773702     659 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.140469 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.773793     659 reflector.go:138] object-"kube-system"/"kube-proxy-token-2qnp8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2qnp8" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.140683 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.773857     659 reflector.go:138] object-"default"/"default-token-gr5lr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gr5lr" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.140916 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:13 old-k8s-version-852171 kubelet[659]: E0930 11:13:13.689457     659 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jrkb5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jrkb5" is forbidden: User "system:node:old-k8s-version-852171" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-852171' and this object
	W0930 11:18:46.150042 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:17 old-k8s-version-852171 kubelet[659]: E0930 11:13:17.838784     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0930 11:18:46.150236 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:17 old-k8s-version-852171 kubelet[659]: E0930 11:13:17.967882     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.153486 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:32 old-k8s-version-852171 kubelet[659]: E0930 11:13:32.371316     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0930 11:18:46.155273 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:40 old-k8s-version-852171 kubelet[659]: E0930 11:13:40.058681     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.155616 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:41 old-k8s-version-852171 kubelet[659]: E0930 11:13:41.064307     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.156152 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:45 old-k8s-version-852171 kubelet[659]: E0930 11:13:45.364824     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.156488 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:46 old-k8s-version-852171 kubelet[659]: E0930 11:13:46.682850     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.156934 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:48 old-k8s-version-852171 kubelet[659]: E0930 11:13:48.084485     659 pod_workers.go:191] Error syncing pod c6c21cee-2a02-43eb-b0b3-2097030726c9 ("storage-provisioner_kube-system(c6c21cee-2a02-43eb-b0b3-2097030726c9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c6c21cee-2a02-43eb-b0b3-2097030726c9)"
	W0930 11:18:46.157876 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:59 old-k8s-version-852171 kubelet[659]: E0930 11:13:59.117882     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.160390 2748394 logs.go:138] Found kubelet problem: Sep 30 11:13:59 old-k8s-version-852171 kubelet[659]: E0930 11:13:59.380652     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0930 11:18:46.160861 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:06 old-k8s-version-852171 kubelet[659]: E0930 11:14:06.682813     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.161050 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:11 old-k8s-version-852171 kubelet[659]: E0930 11:14:11.376076     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.161383 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:18 old-k8s-version-852171 kubelet[659]: E0930 11:14:18.362371     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.161597 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:26 old-k8s-version-852171 kubelet[659]: E0930 11:14:26.362999     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.162196 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:32 old-k8s-version-852171 kubelet[659]: E0930 11:14:32.235186     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.162528 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:36 old-k8s-version-852171 kubelet[659]: E0930 11:14:36.682863     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.164964 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:40 old-k8s-version-852171 kubelet[659]: E0930 11:14:40.370995     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0930 11:18:46.165292 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:47 old-k8s-version-852171 kubelet[659]: E0930 11:14:47.362791     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.165476 2748394 logs.go:138] Found kubelet problem: Sep 30 11:14:55 old-k8s-version-852171 kubelet[659]: E0930 11:14:55.363277     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.165805 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:00 old-k8s-version-852171 kubelet[659]: E0930 11:15:00.372648     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.165992 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:07 old-k8s-version-852171 kubelet[659]: E0930 11:15:07.363816     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.166583 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:13 old-k8s-version-852171 kubelet[659]: E0930 11:15:13.370084     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.166912 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:16 old-k8s-version-852171 kubelet[659]: E0930 11:15:16.682725     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.167098 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:18 old-k8s-version-852171 kubelet[659]: E0930 11:15:18.362719     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.167282 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:29 old-k8s-version-852171 kubelet[659]: E0930 11:15:29.362752     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.167637 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:31 old-k8s-version-852171 kubelet[659]: E0930 11:15:31.363303     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.167828 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:41 old-k8s-version-852171 kubelet[659]: E0930 11:15:41.363227     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.168157 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:43 old-k8s-version-852171 kubelet[659]: E0930 11:15:43.369318     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.168353 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:54 old-k8s-version-852171 kubelet[659]: E0930 11:15:54.362782     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.168681 2748394 logs.go:138] Found kubelet problem: Sep 30 11:15:57 old-k8s-version-852171 kubelet[659]: E0930 11:15:57.362734     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.171117 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:07 old-k8s-version-852171 kubelet[659]: E0930 11:16:07.371795     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0930 11:18:46.171445 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:11 old-k8s-version-852171 kubelet[659]: E0930 11:16:11.363023     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.171634 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:19 old-k8s-version-852171 kubelet[659]: E0930 11:16:19.364941     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.171960 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:22 old-k8s-version-852171 kubelet[659]: E0930 11:16:22.362331     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.172145 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:32 old-k8s-version-852171 kubelet[659]: E0930 11:16:32.376208     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.172734 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:35 old-k8s-version-852171 kubelet[659]: E0930 11:16:35.590451     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.173060 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:36 old-k8s-version-852171 kubelet[659]: E0930 11:16:36.682806     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.173244 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:46 old-k8s-version-852171 kubelet[659]: E0930 11:16:46.362767     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.173579 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:49 old-k8s-version-852171 kubelet[659]: E0930 11:16:49.362912     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.173765 2748394 logs.go:138] Found kubelet problem: Sep 30 11:16:57 old-k8s-version-852171 kubelet[659]: E0930 11:16:57.366810     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.174094 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:02 old-k8s-version-852171 kubelet[659]: E0930 11:17:02.362440     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.174278 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:10 old-k8s-version-852171 kubelet[659]: E0930 11:17:10.363234     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.174604 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:16 old-k8s-version-852171 kubelet[659]: E0930 11:17:16.362318     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.174790 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:22 old-k8s-version-852171 kubelet[659]: E0930 11:17:22.362662     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.175116 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:27 old-k8s-version-852171 kubelet[659]: E0930 11:17:27.362898     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.175298 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:33 old-k8s-version-852171 kubelet[659]: E0930 11:17:33.362859     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.175629 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:40 old-k8s-version-852171 kubelet[659]: E0930 11:17:40.362306     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.175813 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:47 old-k8s-version-852171 kubelet[659]: E0930 11:17:47.362810     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.176140 2748394 logs.go:138] Found kubelet problem: Sep 30 11:17:54 old-k8s-version-852171 kubelet[659]: E0930 11:17:54.362341     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.176323 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:00 old-k8s-version-852171 kubelet[659]: E0930 11:18:00.373033     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.176648 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:09 old-k8s-version-852171 kubelet[659]: E0930 11:18:09.364507     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.176834 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:14 old-k8s-version-852171 kubelet[659]: E0930 11:18:14.362968     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.177164 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:21 old-k8s-version-852171 kubelet[659]: E0930 11:18:21.363375     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.177348 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:27 old-k8s-version-852171 kubelet[659]: E0930 11:18:27.362752     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.177673 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:32 old-k8s-version-852171 kubelet[659]: E0930 11:18:32.363035     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.177859 2748394 logs.go:138] Found kubelet problem: Sep 30 11:18:42 old-k8s-version-852171 kubelet[659]: E0930 11:18:42.363229     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0930 11:18:46.177869 2748394 logs.go:123] Gathering logs for describe nodes ...
	I0930 11:18:46.177891 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 11:18:46.326435 2748394 logs.go:123] Gathering logs for etcd [bf09b410e75f8b5b7a1af956c961ed5d411b85e54a06fdec3f471c80d8088e5b] ...
	I0930 11:18:46.326467 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf09b410e75f8b5b7a1af956c961ed5d411b85e54a06fdec3f471c80d8088e5b"
	I0930 11:18:46.367406 2748394 logs.go:123] Gathering logs for etcd [ac879314a70238e9a3d188a20b16c633d0913c497744edf9f8fb4e81d4d8cffc] ...
	I0930 11:18:46.367438 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac879314a70238e9a3d188a20b16c633d0913c497744edf9f8fb4e81d4d8cffc"
	I0930 11:18:46.412274 2748394 logs.go:123] Gathering logs for dmesg ...
	I0930 11:18:46.412304 2748394 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 11:18:46.428905 2748394 out.go:358] Setting ErrFile to fd 2...
	I0930 11:18:46.428929 2748394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0930 11:18:46.428976 2748394 out.go:270] X Problems detected in kubelet:
	W0930 11:18:46.428995 2748394 out.go:270]   Sep 30 11:18:14 old-k8s-version-852171 kubelet[659]: E0930 11:18:14.362968     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.429003 2748394 out.go:270]   Sep 30 11:18:21 old-k8s-version-852171 kubelet[659]: E0930 11:18:21.363375     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.429010 2748394 out.go:270]   Sep 30 11:18:27 old-k8s-version-852171 kubelet[659]: E0930 11:18:27.362752     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0930 11:18:46.429017 2748394 out.go:270]   Sep 30 11:18:32 old-k8s-version-852171 kubelet[659]: E0930 11:18:32.363035     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	W0930 11:18:46.429032 2748394 out.go:270]   Sep 30 11:18:42 old-k8s-version-852171 kubelet[659]: E0930 11:18:42.363229     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0930 11:18:46.429038 2748394 out.go:358] Setting ErrFile to fd 2...
	I0930 11:18:46.429045 2748394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:18:56.430176 2748394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:18:56.441626 2748394 api_server.go:72] duration metric: took 6m3.002247707s to wait for apiserver process to appear ...
	I0930 11:18:56.441651 2748394 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:18:56.443811 2748394 out.go:201] 
	W0930 11:18:56.445728 2748394 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0930 11:18:56.445746 2748394 out.go:270] * 
	W0930 11:18:56.446688 2748394 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 11:18:56.449054 2748394 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	fc8be8a88bedc       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   249d937196cc7       dashboard-metrics-scraper-8d5bb5db8-z8j5q
	8d88cb7d95363       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   40da99dfb8c5a       storage-provisioner
	348449ef03663       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   6b5bedbc79505       kubernetes-dashboard-cd95d586-d7mlj
	c51e8741669e5       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   6626c855e7b40       kube-proxy-kxvn5
	71d296562fe24       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   6daab8202968e       coredns-74ff55c5b-h5pdm
	3b8f51dd2e463       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   c05cbdbb40b25       busybox
	f5215cf9da519       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   c4310f782395f       kindnet-55hbq
	bd1018a19355d       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   40da99dfb8c5a       storage-provisioner
	bf09b410e75f8       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   18db4d5160980       etcd-old-k8s-version-852171
	5dad8b0ba3773       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   44cd5a3d4feea       kube-scheduler-old-k8s-version-852171
	7304773a3405a       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   8bead4d346c95       kube-controller-manager-old-k8s-version-852171
	27928206b912b       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   5889f77df7001       kube-apiserver-old-k8s-version-852171
	43d05ec65d176       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   0c90ccee6d291       busybox
	cd839508497e8       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   518e9022bb8b9       coredns-74ff55c5b-h5pdm
	37144e1d82fd4       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   a79dd56898cc5       kindnet-55hbq
	aac2d21475c26       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   429396bd3dfbd       kube-proxy-kxvn5
	db2335093572e       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   9b13ab2107ccf       kube-controller-manager-old-k8s-version-852171
	2e0c3eafc3ba0       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   a4c15261b31db       kube-apiserver-old-k8s-version-852171
	4c39e90829525       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   853dc452a2b1b       kube-scheduler-old-k8s-version-852171
	ac879314a7023       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   60a2395a44195       etcd-old-k8s-version-852171
	
	
	==> containerd <==
	Sep 30 11:15:12 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:15:12.393315947Z" level=info msg="CreateContainer within sandbox \"249d937196cc7dcd765be99c12adfcf2f4cd337a454f2b1f234f9a29336e4d3f\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"190db5fadd90bbb9472af39b98bfebdca93d6178c30e1c14e44e0ce7ea2be831\""
	Sep 30 11:15:12 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:15:12.395478334Z" level=info msg="StartContainer for \"190db5fadd90bbb9472af39b98bfebdca93d6178c30e1c14e44e0ce7ea2be831\""
	Sep 30 11:15:12 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:15:12.480959144Z" level=info msg="StartContainer for \"190db5fadd90bbb9472af39b98bfebdca93d6178c30e1c14e44e0ce7ea2be831\" returns successfully"
	Sep 30 11:15:12 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:15:12.505237298Z" level=info msg="shim disconnected" id=190db5fadd90bbb9472af39b98bfebdca93d6178c30e1c14e44e0ce7ea2be831 namespace=k8s.io
	Sep 30 11:15:12 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:15:12.505311505Z" level=warning msg="cleaning up after shim disconnected" id=190db5fadd90bbb9472af39b98bfebdca93d6178c30e1c14e44e0ce7ea2be831 namespace=k8s.io
	Sep 30 11:15:12 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:15:12.505324166Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 30 11:15:13 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:15:13.372365632Z" level=info msg="RemoveContainer for \"9f298d51e3ae699606fbf866ad6dd2ce7dc1371bb58a82d4ffa9ecb12bb76757\""
	Sep 30 11:15:13 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:15:13.380119775Z" level=info msg="RemoveContainer for \"9f298d51e3ae699606fbf866ad6dd2ce7dc1371bb58a82d4ffa9ecb12bb76757\" returns successfully"
	Sep 30 11:16:07 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:07.363789544Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 30 11:16:07 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:07.369015435Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 30 11:16:07 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:07.370875004Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 30 11:16:07 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:07.370957384Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 30 11:16:35 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:35.368091850Z" level=info msg="CreateContainer within sandbox \"249d937196cc7dcd765be99c12adfcf2f4cd337a454f2b1f234f9a29336e4d3f\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 30 11:16:35 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:35.385515293Z" level=info msg="CreateContainer within sandbox \"249d937196cc7dcd765be99c12adfcf2f4cd337a454f2b1f234f9a29336e4d3f\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a\""
	Sep 30 11:16:35 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:35.386605685Z" level=info msg="StartContainer for \"fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a\""
	Sep 30 11:16:35 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:35.485474921Z" level=info msg="StartContainer for \"fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a\" returns successfully"
	Sep 30 11:16:35 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:35.509611929Z" level=info msg="shim disconnected" id=fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a namespace=k8s.io
	Sep 30 11:16:35 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:35.509678825Z" level=warning msg="cleaning up after shim disconnected" id=fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a namespace=k8s.io
	Sep 30 11:16:35 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:35.509690050Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 30 11:16:35 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:35.592648255Z" level=info msg="RemoveContainer for \"190db5fadd90bbb9472af39b98bfebdca93d6178c30e1c14e44e0ce7ea2be831\""
	Sep 30 11:16:35 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:16:35.600519886Z" level=info msg="RemoveContainer for \"190db5fadd90bbb9472af39b98bfebdca93d6178c30e1c14e44e0ce7ea2be831\" returns successfully"
	Sep 30 11:18:54 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:18:54.363707441Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 30 11:18:54 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:18:54.370532610Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 30 11:18:54 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:18:54.371940998Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 30 11:18:54 old-k8s-version-852171 containerd[566]: time="2024-09-30T11:18:54.372034561Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [71d296562fe24f66e1a29229021fba4fcf7228bfccea2b74202e1f3b5a5c5061] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:52743 - 16288 "HINFO IN 4286763461019074840.7437226712287819785. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031265822s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0930 11:13:47.692534       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-30 11:13:17.691993245 +0000 UTC m=+0.066401333) (total time: 30.000445086s):
	Trace[2019727887]: [30.000445086s] [30.000445086s] END
	E0930 11:13:47.692568       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0930 11:13:47.692731       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-30 11:13:17.692088531 +0000 UTC m=+0.066496627) (total time: 30.000630923s):
	Trace[939984059]: [30.000630923s] [30.000630923s] END
	E0930 11:13:47.692743       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0930 11:13:47.692802       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-30 11:13:17.692009056 +0000 UTC m=+0.066417152) (total time: 30.000784194s):
	Trace[911902081]: [30.000784194s] [30.000784194s] END
	E0930 11:13:47.692806       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [cd839508497e8be03f9a8159be49515fa9676c830305d9598c401cc752b5586e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:42727 - 40535 "HINFO IN 8731471170453869645.6005400066448653198. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023982114s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-852171
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-852171
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=old-k8s-version-852171
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T11_10_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:10:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-852171
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:18:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:14:04 +0000   Mon, 30 Sep 2024 11:10:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:14:04 +0000   Mon, 30 Sep 2024 11:10:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:14:04 +0000   Mon, 30 Sep 2024 11:10:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:14:04 +0000   Mon, 30 Sep 2024 11:10:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-852171
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c7a573643a74cb3818c8a9036270499
	  System UUID:                33760e1a-e309-45aa-af03-a54f962a44c9
	  Boot ID:                    65cfb3b2-92d4-49d4-b46a-56cf6adc9d81
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 coredns-74ff55c5b-h5pdm                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m4s
	  kube-system                 etcd-old-k8s-version-852171                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m11s
	  kube-system                 kindnet-55hbq                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m4s
	  kube-system                 kube-apiserver-old-k8s-version-852171             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-controller-manager-old-k8s-version-852171    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-proxy-kxvn5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 kube-scheduler-old-k8s-version-852171             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 metrics-server-9975d5f86-m88nk                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m28s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-z8j5q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-d7mlj               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m11s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m11s                  kubelet     Node old-k8s-version-852171 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m11s                  kubelet     Node old-k8s-version-852171 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m11s                  kubelet     Node old-k8s-version-852171 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m11s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m4s                   kubelet     Node old-k8s-version-852171 status is now: NodeReady
	  Normal  Starting                 8m2s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m57s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m57s (x8 over 5m57s)  kubelet     Node old-k8s-version-852171 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s (x8 over 5m57s)  kubelet     Node old-k8s-version-852171 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x7 over 5m57s)  kubelet     Node old-k8s-version-852171 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m40s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [ac879314a70238e9a3d188a20b16c633d0913c497744edf9f8fb4e81d4d8cffc] <==
	raft2024/09/30 11:10:29 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/09/30 11:10:29 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/09/30 11:10:29 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/09/30 11:10:29 INFO: ea7e25599daad906 became leader at term 2
	raft2024/09/30 11:10:29 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-09-30 11:10:29.672531 I | etcdserver: published {Name:old-k8s-version-852171 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-09-30 11:10:29.675653 I | embed: ready to serve client requests
	2024-09-30 11:10:29.677162 I | embed: serving client requests on 192.168.76.2:2379
	2024-09-30 11:10:29.686174 I | embed: ready to serve client requests
	2024-09-30 11:10:29.723666 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-30 11:10:29.725731 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-30 11:10:29.729268 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-30 11:10:29.781821 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-30 11:10:50.447414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:10:50.506703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:11:00.502489 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:11:10.502080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:11:20.502060 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:11:30.501877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:11:40.502060 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:11:50.501997 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:12:00.502537 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:12:10.502772 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:12:20.502148 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:12:30.516161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [bf09b410e75f8b5b7a1af956c961ed5d411b85e54a06fdec3f471c80d8088e5b] <==
	2024-09-30 11:14:51.255103 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:15:01.257822 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:15:11.261688 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:15:21.255224 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:15:31.255092 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:15:41.257174 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:15:51.259730 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:16:01.260740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:16:11.255223 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:16:21.255186 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:16:31.255226 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:16:41.255097 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:16:51.255188 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:17:01.255238 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:17:11.255646 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:17:21.255084 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:17:31.255058 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:17:41.255109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:17:51.255134 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:18:01.255277 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:18:11.255174 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:18:21.254949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:18:31.255092 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:18:41.255015 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-30 11:18:51.255657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:18:58 up 1 day, 19:01,  0 users,  load average: 0.94, 1.83, 2.67
	Linux old-k8s-version-852171 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [37144e1d82fd46928b194266869ab94822cb7c307075051434a8baf0910be3a8] <==
	I0930 11:10:57.922316       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0930 11:10:58.332731       1 controller.go:334] Starting controller kube-network-policies
	I0930 11:10:58.332817       1 controller.go:338] Waiting for informer caches to sync
	I0930 11:10:58.332856       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0930 11:10:58.433947       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0930 11:10:58.433986       1 metrics.go:61] Registering metrics
	I0930 11:10:58.434081       1 controller.go:374] Syncing nftables rules
	I0930 11:11:08.337091       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:11:08.337150       1 main.go:299] handling current node
	I0930 11:11:18.332859       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:11:18.332892       1 main.go:299] handling current node
	I0930 11:11:28.341953       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:11:28.341989       1 main.go:299] handling current node
	I0930 11:11:38.339683       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:11:38.339719       1 main.go:299] handling current node
	I0930 11:11:48.332572       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:11:48.332606       1 main.go:299] handling current node
	I0930 11:11:58.333357       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:11:58.333396       1 main.go:299] handling current node
	I0930 11:12:08.332757       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:12:08.332789       1 main.go:299] handling current node
	I0930 11:12:18.341815       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:12:18.341853       1 main.go:299] handling current node
	I0930 11:12:28.337487       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:12:28.337530       1 main.go:299] handling current node
	
	
	==> kindnet [f5215cf9da519d5ed406523e41bc68a570586977204e40f983b753e6b12f62f1] <==
	I0930 11:16:57.847787       1 main.go:299] handling current node
	I0930 11:17:07.848652       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:17:07.848694       1 main.go:299] handling current node
	I0930 11:17:17.841241       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:17:17.841277       1 main.go:299] handling current node
	I0930 11:17:27.847897       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:17:27.847995       1 main.go:299] handling current node
	I0930 11:17:37.848041       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:17:37.848081       1 main.go:299] handling current node
	I0930 11:17:47.849028       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:17:47.849064       1 main.go:299] handling current node
	I0930 11:17:57.846411       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:17:57.846456       1 main.go:299] handling current node
	I0930 11:18:07.849239       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:18:07.849271       1 main.go:299] handling current node
	I0930 11:18:17.840996       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:18:17.841032       1 main.go:299] handling current node
	I0930 11:18:27.847105       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:18:27.847141       1 main.go:299] handling current node
	I0930 11:18:37.849136       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:18:37.849186       1 main.go:299] handling current node
	I0930 11:18:47.849791       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:18:47.849825       1 main.go:299] handling current node
	I0930 11:18:57.846925       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0930 11:18:57.846968       1 main.go:299] handling current node
	
	
	==> kube-apiserver [27928206b912b5caa53bfc5467de1638284a81a43e7262d3661bd5d8430a9d7f] <==
	I0930 11:15:42.270243       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0930 11:15:42.270464       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0930 11:16:14.171804       1 client.go:360] parsed scheme: "passthrough"
	I0930 11:16:14.171850       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0930 11:16:14.171885       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0930 11:16:17.754854       1 handler_proxy.go:102] no RequestInfo found in the context
	E0930 11:16:17.754930       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0930 11:16:17.754938       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 11:16:46.511981       1 client.go:360] parsed scheme: "passthrough"
	I0930 11:16:46.512036       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0930 11:16:46.512052       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0930 11:17:23.454807       1 client.go:360] parsed scheme: "passthrough"
	I0930 11:17:23.454849       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0930 11:17:23.454857       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0930 11:18:07.525041       1 client.go:360] parsed scheme: "passthrough"
	I0930 11:18:07.525087       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0930 11:18:07.525097       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0930 11:18:14.678284       1 handler_proxy.go:102] no RequestInfo found in the context
	E0930 11:18:14.678479       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0930 11:18:14.678497       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0930 11:18:42.072801       1 client.go:360] parsed scheme: "passthrough"
	I0930 11:18:42.072928       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0930 11:18:42.072983       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [2e0c3eafc3ba0696284889beac444aad70eb46423288ac9bd41aa4dd0ed4a245] <==
	I0930 11:10:37.098846       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0930 11:10:37.099095       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0930 11:10:37.120175       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0930 11:10:37.128064       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0930 11:10:37.128091       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0930 11:10:37.564896       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 11:10:37.606113       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0930 11:10:37.683803       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0930 11:10:37.684884       1 controller.go:606] quota admission added evaluator for: endpoints
	I0930 11:10:37.691681       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 11:10:38.154691       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 11:10:38.787670       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0930 11:10:39.089510       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0930 11:10:39.157662       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0930 11:10:54.802046       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0930 11:10:54.876968       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0930 11:11:04.697263       1 client.go:360] parsed scheme: "passthrough"
	I0930 11:11:04.697306       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0930 11:11:04.697317       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0930 11:11:39.929051       1 client.go:360] parsed scheme: "passthrough"
	I0930 11:11:39.929095       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0930 11:11:39.929104       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0930 11:12:17.601128       1 client.go:360] parsed scheme: "passthrough"
	I0930 11:12:17.601173       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0930 11:12:17.601182       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [7304773a3405a7475e221c939f968656688e4c992063f83831ace3ceda803c7c] <==
	I0930 11:14:38.531850       1 request.go:655] Throttling request took 1.048390505s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0930 11:14:39.383325       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0930 11:15:05.425193       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0930 11:15:11.033745       1 request.go:655] Throttling request took 1.048478665s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W0930 11:15:11.885208       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0930 11:15:35.927545       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0930 11:15:43.535841       1 request.go:655] Throttling request took 1.048353077s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0930 11:15:44.386982       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0930 11:16:06.429207       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0930 11:16:16.037606       1 request.go:655] Throttling request took 1.048416289s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0930 11:16:16.889030       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0930 11:16:36.930989       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0930 11:16:48.539506       1 request.go:655] Throttling request took 1.042252034s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0930 11:16:49.390947       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0930 11:17:07.432991       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0930 11:17:21.041383       1 request.go:655] Throttling request took 1.048415927s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0930 11:17:21.893003       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0930 11:17:37.937348       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0930 11:17:53.543549       1 request.go:655] Throttling request took 1.048346108s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0930 11:17:54.395019       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0930 11:18:08.437495       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0930 11:18:26.044829       1 request.go:655] Throttling request took 1.048259749s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0930 11:18:26.896513       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0930 11:18:38.939497       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0930 11:18:58.546954       1 request.go:655] Throttling request took 1.048465222s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	
	
	==> kube-controller-manager [db2335093572eb72f5de83a00411c45652f8ec375bb1ebdcfa6fae0d706b1e2a] <==
	I0930 11:10:54.773528       1 event.go:291] "Event occurred" object="old-k8s-version-852171" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-852171 event: Registered Node old-k8s-version-852171 in Controller"
	I0930 11:10:54.773754       1 shared_informer.go:247] Caches are synced for attach detach 
	I0930 11:10:54.774362       1 shared_informer.go:247] Caches are synced for job 
	I0930 11:10:54.790630       1 shared_informer.go:247] Caches are synced for endpoint 
	E0930 11:10:54.803870       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I0930 11:10:54.810164       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0930 11:10:54.849958       1 shared_informer.go:247] Caches are synced for deployment 
	I0930 11:10:54.870113       1 shared_informer.go:247] Caches are synced for disruption 
	I0930 11:10:54.870217       1 disruption.go:339] Sending events to api server.
	I0930 11:10:54.891130       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-55hbq"
	I0930 11:10:54.908968       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0930 11:10:54.966406       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kxvn5"
	I0930 11:10:54.988297       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-h5pdm"
	I0930 11:10:55.029833       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-g2bst"
	I0930 11:10:55.044622       1 shared_informer.go:247] Caches are synced for resource quota 
	I0930 11:10:55.061151       1 shared_informer.go:247] Caches are synced for resource quota 
	I0930 11:10:55.174043       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0930 11:10:55.382357       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0930 11:10:55.459542       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0930 11:10:55.459580       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0930 11:10:56.758619       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0930 11:10:56.779138       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-g2bst"
	I0930 11:10:59.773129       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0930 11:12:29.387352       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0930 11:12:30.445033       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-m88nk"
	
	
	==> kube-proxy [aac2d21475c261a888c6689fc91be4ffc292d1e0eab040ad68df7c73ae710f6f] <==
	I0930 11:10:56.005418       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0930 11:10:56.005635       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0930 11:10:56.033267       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0930 11:10:56.033370       1 server_others.go:185] Using iptables Proxier.
	I0930 11:10:56.033573       1 server.go:650] Version: v1.20.0
	I0930 11:10:56.034163       1 config.go:315] Starting service config controller
	I0930 11:10:56.034173       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0930 11:10:56.034886       1 config.go:224] Starting endpoint slice config controller
	I0930 11:10:56.034894       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0930 11:10:56.134325       1 shared_informer.go:247] Caches are synced for service config 
	I0930 11:10:56.135666       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [c51e8741669e52df8a6fe07888e8c0e98e5233e0d659eef6e07e454291c68107] <==
	I0930 11:13:18.177447       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0930 11:13:18.177668       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0930 11:13:18.196236       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0930 11:13:18.196490       1 server_others.go:185] Using iptables Proxier.
	I0930 11:13:18.196815       1 server.go:650] Version: v1.20.0
	I0930 11:13:18.197417       1 config.go:315] Starting service config controller
	I0930 11:13:18.197435       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0930 11:13:18.197425       1 config.go:224] Starting endpoint slice config controller
	I0930 11:13:18.197650       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0930 11:13:18.297585       1 shared_informer.go:247] Caches are synced for service config 
	I0930 11:13:18.297874       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [4c39e9082952590d679ee58214b6a4fa416b2a839c581ae827b77ed10269e492] <==
	I0930 11:10:30.780916       1 serving.go:331] Generated self-signed cert in-memory
	W0930 11:10:36.284701       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 11:10:36.284960       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 11:10:36.285101       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 11:10:36.285196       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 11:10:36.427693       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0930 11:10:36.431687       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0930 11:10:36.431812       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 11:10:36.431831       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 11:10:36.462208       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 11:10:36.471577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 11:10:36.475964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0930 11:10:36.476046       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 11:10:36.476268       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 11:10:36.476394       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 11:10:36.476508       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 11:10:36.476594       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 11:10:36.476678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 11:10:36.476760       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 11:10:36.476871       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 11:10:36.473972       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0930 11:10:38.032003       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [5dad8b0ba377364fd2242c1fd0cce8f56d5513c2b51a7703f0468c415d9b95d9] <==
	I0930 11:13:07.983070       1 serving.go:331] Generated self-signed cert in-memory
	W0930 11:13:13.498990       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 11:13:13.499031       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 11:13:13.499040       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 11:13:13.499050       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 11:13:13.862608       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0930 11:13:13.874876       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 11:13:13.874907       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 11:13:13.874926       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0930 11:13:13.978917       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 30 11:17:27 old-k8s-version-852171 kubelet[659]: E0930 11:17:27.362898     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	Sep 30 11:17:33 old-k8s-version-852171 kubelet[659]: E0930 11:17:33.362859     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 30 11:17:40 old-k8s-version-852171 kubelet[659]: I0930 11:17:40.361912     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a
	Sep 30 11:17:40 old-k8s-version-852171 kubelet[659]: E0930 11:17:40.362306     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	Sep 30 11:17:47 old-k8s-version-852171 kubelet[659]: E0930 11:17:47.362810     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 30 11:17:54 old-k8s-version-852171 kubelet[659]: I0930 11:17:54.361954     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a
	Sep 30 11:17:54 old-k8s-version-852171 kubelet[659]: E0930 11:17:54.362341     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	Sep 30 11:18:00 old-k8s-version-852171 kubelet[659]: E0930 11:18:00.373033     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 30 11:18:09 old-k8s-version-852171 kubelet[659]: I0930 11:18:09.364122     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a
	Sep 30 11:18:09 old-k8s-version-852171 kubelet[659]: E0930 11:18:09.364507     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	Sep 30 11:18:14 old-k8s-version-852171 kubelet[659]: E0930 11:18:14.362968     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 30 11:18:21 old-k8s-version-852171 kubelet[659]: I0930 11:18:21.362542     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a
	Sep 30 11:18:21 old-k8s-version-852171 kubelet[659]: E0930 11:18:21.363375     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	Sep 30 11:18:27 old-k8s-version-852171 kubelet[659]: E0930 11:18:27.362752     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 30 11:18:32 old-k8s-version-852171 kubelet[659]: I0930 11:18:32.362240     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a
	Sep 30 11:18:32 old-k8s-version-852171 kubelet[659]: E0930 11:18:32.363035     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	Sep 30 11:18:42 old-k8s-version-852171 kubelet[659]: E0930 11:18:42.363229     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 30 11:18:46 old-k8s-version-852171 kubelet[659]: I0930 11:18:46.362001     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a
	Sep 30 11:18:46 old-k8s-version-852171 kubelet[659]: E0930 11:18:46.363383     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	Sep 30 11:18:54 old-k8s-version-852171 kubelet[659]: E0930 11:18:54.372188     659 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 30 11:18:54 old-k8s-version-852171 kubelet[659]: E0930 11:18:54.372239     659 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 30 11:18:54 old-k8s-version-852171 kubelet[659]: E0930 11:18:54.372371     659 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-rn6jg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-m88nk_kube-system(c1e5eaa
b-3082-4bbf-aa5e-c3d2046ca875): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 30 11:18:54 old-k8s-version-852171 kubelet[659]: E0930 11:18:54.372404     659 pod_workers.go:191] Error syncing pod c1e5eaab-3082-4bbf-aa5e-c3d2046ca875 ("metrics-server-9975d5f86-m88nk_kube-system(c1e5eaab-3082-4bbf-aa5e-c3d2046ca875)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 30 11:18:58 old-k8s-version-852171 kubelet[659]: I0930 11:18:58.361953     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: fc8be8a88bedca2d8c29d323fac379ff8582fe2ca63cbd60cb2d0b2a501e0a0a
	Sep 30 11:18:58 old-k8s-version-852171 kubelet[659]: E0930 11:18:58.366167     659 pod_workers.go:191] Error syncing pod 1b424440-feed-4515-ab06-1253c7b96fde ("dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z8j5q_kubernetes-dashboard(1b424440-feed-4515-ab06-1253c7b96fde)"
	
	
	==> kubernetes-dashboard [348449ef03663da76e752c4aa688bc8b80580838b44c841f72709b8cae477153] <==
	2024/09/30 11:13:42 Starting overwatch
	2024/09/30 11:13:42 Using namespace: kubernetes-dashboard
	2024/09/30 11:13:42 Using in-cluster config to connect to apiserver
	2024/09/30 11:13:42 Using secret token for csrf signing
	2024/09/30 11:13:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/30 11:13:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/30 11:13:43 Successful initial request to the apiserver, version: v1.20.0
	2024/09/30 11:13:43 Generating JWE encryption key
	2024/09/30 11:13:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/30 11:13:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/30 11:13:43 Initializing JWE encryption key from synchronized object
	2024/09/30 11:13:43 Creating in-cluster Sidecar client
	2024/09/30 11:13:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/30 11:13:43 Serving insecurely on HTTP port: 9090
	2024/09/30 11:14:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/30 11:14:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/30 11:15:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/30 11:15:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/30 11:16:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/30 11:16:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/30 11:17:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/30 11:17:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/30 11:18:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/30 11:18:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8d88cb7d95363af60796acda1cf39e0daf68f34c71e9906563eed8aa171bda75] <==
	I0930 11:14:02.473685       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 11:14:02.494155       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 11:14:02.494332       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 11:14:19.984751       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 11:14:19.991302       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-852171_33c089b8-3caf-4470-ae9a-4eeaf44e3252!
	I0930 11:14:20.002233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e18c2136-bb93-48d0-b5ef-de2d021aaddc", APIVersion:"v1", ResourceVersion:"837", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-852171_33c089b8-3caf-4470-ae9a-4eeaf44e3252 became leader
	I0930 11:14:20.100013       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-852171_33c089b8-3caf-4470-ae9a-4eeaf44e3252!
	
	
	==> storage-provisioner [bd1018a19355d11d0c01dc9dde1d023a8af02b1ad94db1fb3d13c565b433d42e] <==
	I0930 11:13:17.245659       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0930 11:13:47.251475       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-852171 -n old-k8s-version-852171
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-852171 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-m88nk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-852171 describe pod metrics-server-9975d5f86-m88nk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-852171 describe pod metrics-server-9975d5f86-m88nk: exit status 1 (149.07451ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-m88nk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-852171 describe pod metrics-server-9975d5f86-m88nk: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (376.06s)
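The logs above narrow the post-mortem down: metrics-server sits in ErrImagePull/ImagePullBackOff because the image host fake.domain never resolves ("no such host" from 192.168.76.1:53), and dashboard-metrics-scraper is in CrashLoopBackOff. A rough manual re-run of the same triage the helper performs (context and pod name copied from the output above; the NotFound from the describe is consistent with it being issued without a namespace, since the kubelet logs place the pod in kube-system) might look like:

	# list every pod that is not Running, across all namespaces
	kubectl --context old-k8s-version-852171 get po -A --field-selector=status.phase!=Running
	# describe the flagged pod in the namespace the kubelet logs reference
	kubectl --context old-k8s-version-852171 describe pod -n kube-system metrics-server-9975d5f86-m88nk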

                                                
                                    

Test pass (298/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.92
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 4.63
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.09
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 217.44
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 16.48
34 TestAddons/parallel/Ingress 18.7
35 TestAddons/parallel/InspektorGadget 11.26
36 TestAddons/parallel/MetricsServer 6.83
38 TestAddons/parallel/CSI 45.2
39 TestAddons/parallel/Headlamp 16.59
40 TestAddons/parallel/CloudSpanner 6.67
41 TestAddons/parallel/LocalPath 52.51
42 TestAddons/parallel/NvidiaDevicePlugin 6.74
43 TestAddons/parallel/Yakd 10.95
44 TestAddons/StoppedEnableDisable 12.26
45 TestCertOptions 37.59
46 TestCertExpiration 231.07
48 TestForceSystemdFlag 33.54
49 TestForceSystemdEnv 46.58
50 TestDockerEnvContainerd 45.36
55 TestErrorSpam/setup 30.04
56 TestErrorSpam/start 0.68
57 TestErrorSpam/status 0.95
58 TestErrorSpam/pause 2.2
59 TestErrorSpam/unpause 1.73
60 TestErrorSpam/stop 1.48
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 58.23
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 5.86
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.87
72 TestFunctional/serial/CacheCmd/cache/add_local 1.2
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.96
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.13
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 39.51
81 TestFunctional/serial/ComponentHealth 0.11
82 TestFunctional/serial/LogsCmd 1.74
83 TestFunctional/serial/LogsFileCmd 1.76
84 TestFunctional/serial/InvalidService 4.55
86 TestFunctional/parallel/ConfigCmd 0.48
87 TestFunctional/parallel/DashboardCmd 10.82
88 TestFunctional/parallel/DryRun 0.4
89 TestFunctional/parallel/InternationalLanguage 0.21
90 TestFunctional/parallel/StatusCmd 1.16
94 TestFunctional/parallel/ServiceCmdConnect 10.67
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 25.66
98 TestFunctional/parallel/SSHCmd 0.66
99 TestFunctional/parallel/CpCmd 2.43
101 TestFunctional/parallel/FileSync 0.33
102 TestFunctional/parallel/CertSync 2.12
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
110 TestFunctional/parallel/License 0.28
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.47
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.28
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
124 TestFunctional/parallel/ProfileCmd/profile_list 0.47
125 TestFunctional/parallel/ServiceCmd/List 0.57
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
128 TestFunctional/parallel/MountCmd/any-port 8.33
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
130 TestFunctional/parallel/ServiceCmd/Format 0.42
131 TestFunctional/parallel/ServiceCmd/URL 0.48
132 TestFunctional/parallel/MountCmd/specific-port 2.02
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.72
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.19
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.92
141 TestFunctional/parallel/ImageCommands/Setup 0.62
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.39
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.49
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 133.72
159 TestMultiControlPlane/serial/DeployApp 33.19
160 TestMultiControlPlane/serial/PingHostFromPods 1.57
161 TestMultiControlPlane/serial/AddWorkerNode 21.67
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
164 TestMultiControlPlane/serial/CopyFile 18.52
165 TestMultiControlPlane/serial/StopSecondaryNode 12.82
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
167 TestMultiControlPlane/serial/RestartSecondaryNode 19.18
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.22
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 140.49
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.86
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
172 TestMultiControlPlane/serial/StopCluster 36.1
173 TestMultiControlPlane/serial/RestartCluster 55.64
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
175 TestMultiControlPlane/serial/AddSecondaryNode 43.44
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.96
180 TestJSONOutput/start/Command 47.4
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.72
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.64
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.78
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.22
205 TestKicCustomNetwork/create_custom_network 37.07
206 TestKicCustomNetwork/use_default_bridge_network 33.45
207 TestKicExistingNetwork 30.37
208 TestKicCustomSubnet 35.98
209 TestKicStaticIP 33.02
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 70.02
214 TestMountStart/serial/StartWithMountFirst 6.12
215 TestMountStart/serial/VerifyMountFirst 0.25
216 TestMountStart/serial/StartWithMountSecond 8.74
217 TestMountStart/serial/VerifyMountSecond 0.26
218 TestMountStart/serial/DeleteFirst 1.62
219 TestMountStart/serial/VerifyMountPostDelete 0.24
220 TestMountStart/serial/Stop 1.19
221 TestMountStart/serial/RestartStopped 7.3
222 TestMountStart/serial/VerifyMountPostStop 0.31
225 TestMultiNode/serial/FreshStart2Nodes 72.7
226 TestMultiNode/serial/DeployApp2Nodes 19.73
227 TestMultiNode/serial/PingHostFrom2Pods 0.98
228 TestMultiNode/serial/AddNode 17.58
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.66
231 TestMultiNode/serial/CopyFile 9.85
232 TestMultiNode/serial/StopNode 2.3
233 TestMultiNode/serial/StartAfterStop 10.15
234 TestMultiNode/serial/RestartKeepsNodes 81.72
235 TestMultiNode/serial/DeleteNode 5.53
236 TestMultiNode/serial/StopMultiNode 24.02
237 TestMultiNode/serial/RestartMultiNode 52.74
238 TestMultiNode/serial/ValidateNameConflict 34.4
243 TestPreload 124.23
245 TestScheduledStopUnix 106.09
248 TestInsufficientStorage 10.67
249 TestRunningBinaryUpgrade 89.91
251 TestKubernetesUpgrade 102.79
252 TestMissingContainerUpgrade 185.22
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 39.19
256 TestNoKubernetes/serial/StartWithStopK8s 18.4
257 TestNoKubernetes/serial/Start 7.59
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
259 TestNoKubernetes/serial/ProfileList 0.98
260 TestNoKubernetes/serial/Stop 1.22
261 TestNoKubernetes/serial/StartNoArgs 6.73
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
263 TestStoppedBinaryUpgrade/Setup 0.62
264 TestStoppedBinaryUpgrade/Upgrade 157.05
273 TestPause/serial/Start 89.23
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
275 TestPause/serial/SecondStartNoReconfiguration 7.64
276 TestPause/serial/Pause 0.9
277 TestPause/serial/VerifyStatus 0.39
278 TestPause/serial/Unpause 0.84
282 TestPause/serial/PauseAgain 1.1
283 TestPause/serial/DeletePaused 2.91
284 TestPause/serial/VerifyDeletedResources 0.15
289 TestNetworkPlugins/group/false 4.87
294 TestStartStop/group/old-k8s-version/serial/FirstStart 146.72
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.74
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.56
298 TestStartStop/group/no-preload/serial/FirstStart 72.61
299 TestStartStop/group/old-k8s-version/serial/Stop 14.61
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
302 TestStartStop/group/no-preload/serial/DeployApp 10.43
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
304 TestStartStop/group/no-preload/serial/Stop 12.03
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
306 TestStartStop/group/no-preload/serial/SecondStart 280.34
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.15
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
310 TestStartStop/group/no-preload/serial/Pause 4.26
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.02
313 TestStartStop/group/embed-certs/serial/FirstStart 88.4
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.14
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
316 TestStartStop/group/old-k8s-version/serial/Pause 3.88
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.72
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.38
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
321 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
322 TestStartStop/group/embed-certs/serial/DeployApp 9.47
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 289.6
325 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
326 TestStartStop/group/embed-certs/serial/Stop 12.39
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
328 TestStartStop/group/embed-certs/serial/SecondStart 271.19
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.26
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
334 TestStartStop/group/embed-certs/serial/Pause 3.59
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.45
338 TestStartStop/group/newest-cni/serial/FirstStart 47.1
339 TestNetworkPlugins/group/auto/Start 97.29
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
342 TestStartStop/group/newest-cni/serial/Stop 1.29
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
344 TestStartStop/group/newest-cni/serial/SecondStart 16.44
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
348 TestStartStop/group/newest-cni/serial/Pause 3.35
349 TestNetworkPlugins/group/flannel/Start 51.92
350 TestNetworkPlugins/group/auto/KubeletFlags 0.4
351 TestNetworkPlugins/group/auto/NetCatPod 9.33
352 TestNetworkPlugins/group/auto/DNS 0.18
353 TestNetworkPlugins/group/auto/Localhost 0.16
354 TestNetworkPlugins/group/auto/HairPin 0.18
355 TestNetworkPlugins/group/flannel/ControllerPod 6.01
356 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
357 TestNetworkPlugins/group/flannel/NetCatPod 10.36
358 TestNetworkPlugins/group/calico/Start 70.04
359 TestNetworkPlugins/group/flannel/DNS 0.25
360 TestNetworkPlugins/group/flannel/Localhost 0.2
361 TestNetworkPlugins/group/flannel/HairPin 0.18
362 TestNetworkPlugins/group/custom-flannel/Start 59.01
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.41
365 TestNetworkPlugins/group/calico/NetCatPod 10.44
366 TestNetworkPlugins/group/calico/DNS 0.23
367 TestNetworkPlugins/group/calico/Localhost 0.17
368 TestNetworkPlugins/group/calico/HairPin 0.15
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.48
371 TestNetworkPlugins/group/custom-flannel/DNS 0.36
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
374 TestNetworkPlugins/group/kindnet/Start 99.43
375 TestNetworkPlugins/group/bridge/Start 50.1
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
377 TestNetworkPlugins/group/bridge/NetCatPod 9.28
378 TestNetworkPlugins/group/bridge/DNS 0.24
379 TestNetworkPlugins/group/bridge/Localhost 0.16
380 TestNetworkPlugins/group/bridge/HairPin 0.16
381 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
382 TestNetworkPlugins/group/enable-default-cni/Start 53.12
383 TestNetworkPlugins/group/kindnet/KubeletFlags 0.64
384 TestNetworkPlugins/group/kindnet/NetCatPod 10.56
385 TestNetworkPlugins/group/kindnet/DNS 0.22
386 TestNetworkPlugins/group/kindnet/Localhost 0.21
387 TestNetworkPlugins/group/kindnet/HairPin 0.19
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.27
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (5.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-862665 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-862665 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.916803268s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.92s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0930 10:24:15.060555 2544157 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0930 10:24:15.060651 2544157 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
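preload-exists appears to assert only that the tarball reported at preload.go:146 is present in the local cache. A minimal manual check of the same thing (path copied from the log line above) would be:

	ls -lh /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4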

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-862665
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-862665: exit status 85 (84.923225ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-862665 | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC |          |
	|         | -p download-only-862665        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:24:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:24:09.186065 2544162 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:24:09.186210 2544162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:24:09.186220 2544162 out.go:358] Setting ErrFile to fd 2...
	I0930 10:24:09.186226 2544162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:24:09.186464 2544162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	W0930 10:24:09.186589 2544162 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19734-2538756/.minikube/config/config.json: open /home/jenkins/minikube-integration/19734-2538756/.minikube/config/config.json: no such file or directory
	I0930 10:24:09.187008 2544162 out.go:352] Setting JSON to true
	I0930 10:24:09.187993 2544162 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":151598,"bootTime":1727540252,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0930 10:24:09.188074 2544162 start.go:139] virtualization:  
	I0930 10:24:09.190948 2544162 out.go:97] [download-only-862665] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0930 10:24:09.191119 2544162 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 10:24:09.191180 2544162 notify.go:220] Checking for updates...
	I0930 10:24:09.192748 2544162 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:24:09.194645 2544162 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:24:09.196528 2544162 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 10:24:09.198393 2544162 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	I0930 10:24:09.200357 2544162 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0930 10:24:09.203737 2544162 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 10:24:09.203986 2544162 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:24:09.224252 2544162 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:24:09.224374 2544162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:24:09.287081 2544162 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:24:09.277355324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:24:09.287201 2544162 docker.go:318] overlay module found
	I0930 10:24:09.289234 2544162 out.go:97] Using the docker driver based on user configuration
	I0930 10:24:09.289260 2544162 start.go:297] selected driver: docker
	I0930 10:24:09.289266 2544162 start.go:901] validating driver "docker" against <nil>
	I0930 10:24:09.289363 2544162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:24:09.333204 2544162 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:24:09.324130571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:24:09.333403 2544162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:24:09.333706 2544162 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0930 10:24:09.333862 2544162 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 10:24:09.336011 2544162 out.go:169] Using Docker driver with root privileges
	I0930 10:24:09.337677 2544162 cni.go:84] Creating CNI manager for ""
	I0930 10:24:09.337748 2544162 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0930 10:24:09.337762 2544162 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 10:24:09.337838 2544162 start.go:340] cluster config:
	{Name:download-only-862665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-862665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:24:09.339857 2544162 out.go:97] Starting "download-only-862665" primary control-plane node in "download-only-862665" cluster
	I0930 10:24:09.339879 2544162 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0930 10:24:09.341717 2544162 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0930 10:24:09.341748 2544162 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0930 10:24:09.341897 2544162 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0930 10:24:09.356711 2544162 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:24:09.357699 2544162 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0930 10:24:09.357800 2544162 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:24:09.436084 2544162 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0930 10:24:09.436108 2544162 cache.go:56] Caching tarball of preloaded images
	I0930 10:24:09.436260 2544162 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0930 10:24:09.438474 2544162 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0930 10:24:09.438517 2544162 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0930 10:24:09.519718 2544162 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0930 10:24:13.439726 2544162 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0930 10:24:13.439829 2544162 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0930 10:24:13.729641 2544162 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	
	
	* The control-plane node download-only-862665 host does not exist
	  To start a cluster, run: "minikube start -p download-only-862665"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
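The Last Start log above shows the preload tarball being fetched from storage.googleapis.com with an md5 checksum appended to the URL (download.go:107) and then verified (preload.go:254). A hand-rolled sketch of just that download-and-verify step, with the URL and checksum copied from the log, could be:

	curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4"
	# checksum parameter from the log was md5:7e3d48ccb9f143791669d02e14ce1643
	echo "7e3d48ccb9f143791669d02e14ce1643  preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -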

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-862665
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (4.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-833953 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-833953 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.633342556s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.63s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0930 10:24:20.121521 2544157 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0930 10:24:20.121558 2544157 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-2538756/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-833953
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-833953: exit status 85 (86.425508ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-862665 | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC |                     |
	|         | -p download-only-862665        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	| delete  | -p download-only-862665        | download-only-862665 | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	| start   | -o=json --download-only        | download-only-833953 | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC |                     |
	|         | -p download-only-833953        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:24:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:24:15.531723 2544364 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:24:15.531836 2544364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:24:15.531845 2544364 out.go:358] Setting ErrFile to fd 2...
	I0930 10:24:15.531851 2544364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:24:15.532091 2544364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	I0930 10:24:15.532484 2544364 out.go:352] Setting JSON to true
	I0930 10:24:15.533344 2544364 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":151604,"bootTime":1727540252,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0930 10:24:15.533421 2544364 start.go:139] virtualization:  
	I0930 10:24:15.535803 2544364 out.go:97] [download-only-833953] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 10:24:15.536025 2544364 notify.go:220] Checking for updates...
	I0930 10:24:15.537796 2544364 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:24:15.539312 2544364 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:24:15.541056 2544364 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 10:24:15.542637 2544364 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	I0930 10:24:15.544176 2544364 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0930 10:24:15.547222 2544364 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 10:24:15.547467 2544364 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:24:15.573881 2544364 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:24:15.574004 2544364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:24:15.627990 2544364 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-30 10:24:15.618222473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:24:15.628140 2544364 docker.go:318] overlay module found
	I0930 10:24:15.630074 2544364 out.go:97] Using the docker driver based on user configuration
	I0930 10:24:15.630096 2544364 start.go:297] selected driver: docker
	I0930 10:24:15.630103 2544364 start.go:901] validating driver "docker" against <nil>
	I0930 10:24:15.630219 2544364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:24:15.681677 2544364 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-30 10:24:15.671888345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:24:15.681901 2544364 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:24:15.682171 2544364 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0930 10:24:15.682360 2544364 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 10:24:15.684492 2544364 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-833953 host does not exist
	  To start a cluster, run: "minikube start -p download-only-833953"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-833953
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I0930 10:24:21.347825 2544157 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-263235 --alsologtostderr --binary-mirror http://127.0.0.1:41507 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-263235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-263235
--- PASS: TestBinaryMirror (0.60s)
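For readers unfamiliar with the flag, the binary-mirror case above can be exercised by hand roughly as follows. This is only a sketch: binary-mirror-demo is a placeholder profile name, and the URL must point at an HTTP server that actually mirrors the dl.k8s.io release layout (the 127.0.0.1:41507 address above was a throwaway server started by the test itself).

  # fetch kubeadm/kubelet/kubectl through the mirror instead of dl.k8s.io
  minikube start --download-only -p binary-mirror-demo \
    --binary-mirror http://127.0.0.1:41507 \
    --driver=docker --container-runtime=containerd
  minikube delete -p binary-mirror-demo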

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-472765
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-472765: exit status 85 (72.628681ms)

                                                
                                                
-- stdout --
	* Profile "addons-472765" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-472765"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-472765
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-472765: exit status 85 (78.441373ms)

                                                
                                                
-- stdout --
	* Profile "addons-472765" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-472765"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (217.44s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-472765 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-472765 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m37.433366733s)
--- PASS: TestAddons/Setup (217.44s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-472765 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-472765 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.48s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.781323ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-tbdxk" [54bb36b5-2e90-4a96-b79b-47a74c25caa2] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005955652s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xd6th" [1c60d130-5484-480e-83cf-7713c3c48f30] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003929295s
addons_test.go:338: (dbg) Run:  kubectl --context addons-472765 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-472765 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-472765 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.482194028s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 ip
2024/09/30 10:31:54 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.48s)
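For reference, the registry probe the test performs can be reproduced by hand with roughly the commands below (a sketch; the profile name addons-472765 and the node IP 192.168.49.2 are the ones from this run).

  # probe the in-cluster registry service from a throwaway busybox pod
  kubectl --context addons-472765 run registry-test --rm -it --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # the addon also exposes the registry on the node at port 5000 (the test's final GET)
  minikube -p addons-472765 ip
  curl -sI http://192.168.49.2:5000/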

                                                
                                    
x
+
TestAddons/parallel/Ingress (18.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-472765 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-472765 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-472765 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [afcdb026-e7b8-434f-913b-bbe57a2b533e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [afcdb026-e7b8-434f-913b-bbe57a2b533e] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003178886s
I0930 10:33:14.063849 2544157 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-472765 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-472765 addons disable ingress-dns --alsologtostderr -v=1: (1.150155729s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-472765 addons disable ingress --alsologtostderr -v=1: (7.823695569s)
--- PASS: TestAddons/parallel/Ingress (18.70s)
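A by-hand version of the ingress/ingress-dns check looks roughly like this (sketch only; the test itself uses kubectl replace --force on the same testdata manifests, and 192.168.49.2 is this run's node IP).

  kubectl --context addons-472765 wait --for=condition=ready --namespace=ingress-nginx \
    pod --selector=app.kubernetes.io/component=controller --timeout=90s
  kubectl --context addons-472765 apply -f testdata/nginx-ingress-v1.yaml   # ingress for nginx.example.com
  kubectl --context addons-472765 apply -f testdata/nginx-pod-svc.yaml      # backing pod + service
  # send a request through the controller on the node, overriding the Host header
  minikube -p addons-472765 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # ingress-dns: apply the example ingress, then resolve its host against the node IP
  kubectl --context addons-472765 apply -f testdata/ingress-dns-example-v1.yaml
  nslookup hello-john.test 192.168.49.2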

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-v6fkb" [3b463d65-66eb-4e74-af61-178132f37d4f] Running
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.036480443s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-472765
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-472765: (6.225844746s)
--- PASS: TestAddons/parallel/InspektorGadget (11.26s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.83s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.524226ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-8r8w8" [8e4539cd-6124-40d5-bcb5-a9852f6ac989] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003907356s
addons_test.go:413: (dbg) Run:  kubectl --context addons-472765 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)
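The metrics-server assertion is essentially that kubectl top returns data once the addon pod is ready; a minimal sketch (it assumes metrics-server has had time to scrape at least once):

  kubectl --context addons-472765 wait --for=condition=ready -n kube-system \
    pod -l k8s-app=metrics-server --timeout=6m0s
  kubectl --context addons-472765 top pods -n kube-system
  minikube -p addons-472765 addons disable metrics-server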

                                                
                                    
x
+
TestAddons/parallel/CSI (45.2s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0930 10:32:20.209125 2544157 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0930 10:32:20.214317 2544157 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0930 10:32:20.214724 2544157 kapi.go:107] duration metric: took 8.323482ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 8.570693ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-472765 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-472765 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5a032666-d14b-4703-b43e-49177c478117] Pending
helpers_test.go:344: "task-pv-pod" [5a032666-d14b-4703-b43e-49177c478117] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5a032666-d14b-4703-b43e-49177c478117] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004432237s
addons_test.go:528: (dbg) Run:  kubectl --context addons-472765 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-472765 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-472765 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-472765 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-472765 delete pod task-pv-pod: (1.083815816s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-472765 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-472765 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-472765 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5aeb26d5-3535-411a-9171-40eb028f3748] Pending
helpers_test.go:344: "task-pv-pod-restore" [5aeb26d5-3535-411a-9171-40eb028f3748] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5aeb26d5-3535-411a-9171-40eb028f3748] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003576392s
addons_test.go:570: (dbg) Run:  kubectl --context addons-472765 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-472765 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-472765 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-472765 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.744656975s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (45.20s)
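The CSI flow above (claim, pod, snapshot, restore) can be replayed manually with the same testdata manifests; a sketch, with jsonpath polling standing in for the test's helpers:

  kubectl --context addons-472765 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-472765 get pvc hpvc -o jsonpath='{.status.phase}'          # poll until Bound
  kubectl --context addons-472765 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-472765 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-472765 get volumesnapshot new-snapshot-demo \
    -o jsonpath='{.status.readyToUse}'                                                # poll until true
  kubectl --context addons-472765 delete pod task-pv-pod
  kubectl --context addons-472765 delete pvc hpvc
  kubectl --context addons-472765 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-472765 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml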

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-472765 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-472765 --alsologtostderr -v=1: (1.667020957s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-zbrn7" [b89e811e-75e3-4483-817d-c87c81c673b0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-zbrn7" [b89e811e-75e3-4483-817d-c87c81c673b0] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.00479424s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-472765 addons disable headlamp --alsologtostderr -v=1: (5.912868429s)
--- PASS: TestAddons/parallel/Headlamp (16.59s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-6m9r4" [93cf0178-7a5c-49a3-ae95-25dfa6c57a08] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005328331s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-472765
--- PASS: TestAddons/parallel/CloudSpanner (6.67s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (52.51s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-472765 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-472765 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472765 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [820a6fce-7635-4c2d-a2b2-05096e789fcc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [820a6fce-7635-4c2d-a2b2-05096e789fcc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [820a6fce-7635-4c2d-a2b2-05096e789fcc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.012493066s
addons_test.go:938: (dbg) Run:  kubectl --context addons-472765 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 ssh "cat /opt/local-path-provisioner/pvc-555e2a71-67e1-4347-a389-5ac6d3a60356_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-472765 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-472765 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-472765 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.861022172s)
--- PASS: TestAddons/parallel/LocalPath (52.51s)
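The local-path check reduces to: bind a PVC through the rancher provisioner, let a pod write a file into it, then read that file from the host directory backing the volume. A sketch using the same manifests and the path seen in this run (the pvc-... directory name is per-claim and will differ on another run):

  kubectl --context addons-472765 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-472765 apply -f testdata/storage-provisioner-rancher/pod.yaml
  kubectl --context addons-472765 get pvc test-pvc -o jsonpath='{.status.phase}'   # poll until Bound
  # the provisioner backs the volume with a directory on the node
  minikube -p addons-472765 ssh \
    "cat /opt/local-path-provisioner/pvc-555e2a71-67e1-4347-a389-5ac6d3a60356_default_test-pvc/file1"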

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.74s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dzrd4" [ea4e804f-a811-4b14-98c6-2dd6b0db0c84] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.022868695s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-472765
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.74s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.95s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2qzzq" [4f7ca5f8-45eb-468d-88cc-ea8a0995d27a] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.008491849s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-472765 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-472765 addons disable yakd --alsologtostderr -v=1: (5.945485898s)
--- PASS: TestAddons/parallel/Yakd (10.95s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.26s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-472765
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-472765: (12.002887997s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-472765
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-472765
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-472765
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

                                                
                                    
x
+
TestCertOptions (37.59s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-557581 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-557581 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.99040379s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-557581 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-557581 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-557581 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-557581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-557581
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-557581: (1.972216028s)
--- PASS: TestCertOptions (37.59s)
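To confirm the custom SANs and port by hand, the same openssl inspection the test runs is sufficient; a sketch reusing this run's flags:

  minikube start -p cert-options-557581 --memory=2048 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=docker --container-runtime=containerd
  # the extra IPs/names should appear under X509v3 Subject Alternative Name
  minikube -p cert-options-557581 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Alternative Name'
  kubectl --context cert-options-557581 config view   # server URL should use port 8555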

                                                
                                    
x
+
TestCertExpiration (231.07s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-231637 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-231637 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.0117117s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-231637 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-231637 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.676818079s)
helpers_test.go:175: Cleaning up "cert-expiration-231637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-231637
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-231637: (2.374750879s)
--- PASS: TestCertExpiration (231.07s)
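The expiration test is two starts of the same profile with different --cert-expiration values; the second start, issued after the short-lived certificates have lapsed, has to regenerate them. Roughly (cert-expiration-demo is a placeholder profile, and the sleep is an assumption standing in for the test's wait):

  minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=3m \
    --driver=docker --container-runtime=containerd
  sleep 180   # let the 3-minute certificates expire
  minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h \
    --driver=docker --container-runtime=containerd
  minikube delete -p cert-expiration-demo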

                                                
                                    
x
+
TestForceSystemdFlag (33.54s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-328192 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0930 11:07:59.486632 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-328192 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.062329927s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-328192 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-328192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-328192
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-328192: (2.167499856s)
--- PASS: TestForceSystemdFlag (33.54s)
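What the force-systemd tests assert is that containerd was rendered with the systemd cgroup driver. A sketch (force-systemd-demo is a placeholder profile; SystemdCgroup is the usual containerd runc option such a check greps for):

  minikube start -p force-systemd-demo --memory=2048 --force-systemd \
    --driver=docker --container-runtime=containerd
  minikube -p force-systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
  # expected: SystemdCgroup = true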

                                                
                                    
x
+
TestForceSystemdEnv (46.58s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-216994 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-216994 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.655708153s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-216994 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-216994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-216994
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-216994: (2.432355231s)
--- PASS: TestForceSystemdEnv (46.58s)

                                                
                                    
x
+
TestDockerEnvContainerd (45.36s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-318815 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-318815 --driver=docker  --container-runtime=containerd: (29.965364897s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-318815"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bbqJdxqrnLv6/agent.2566059" SSH_AGENT_PID="2566060" DOCKER_HOST=ssh://docker@127.0.0.1:41308 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bbqJdxqrnLv6/agent.2566059" SSH_AGENT_PID="2566060" DOCKER_HOST=ssh://docker@127.0.0.1:41308 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bbqJdxqrnLv6/agent.2566059" SSH_AGENT_PID="2566060" DOCKER_HOST=ssh://docker@127.0.0.1:41308 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.087104101s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bbqJdxqrnLv6/agent.2566059" SSH_AGENT_PID="2566060" DOCKER_HOST=ssh://docker@127.0.0.1:41308 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-318815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-318815
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-318815: (1.904056631s)
--- PASS: TestDockerEnvContainerd (45.36s)
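The docker-env flow can be driven interactively: eval'ing the command exports DOCKER_HOST (an ssh:// URL into the node) plus the SSH agent variables, so a local docker CLI builds directly inside the minikube machine. A sketch (dockerenv-demo and the image tag are placeholders; the test disables BuildKit the same way):

  minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
  eval "$(minikube -p dockerenv-demo docker-env --ssh-host --ssh-add)"
  docker version                                   # client local, daemon inside the node
  DOCKER_BUILDKIT=0 docker build -t local/dockerenv-demo:latest testdata/docker-env
  docker image ls | grep dockerenv-demo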

                                                
                                    
x
+
TestErrorSpam/setup (30.04s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-232823 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-232823 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-232823 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-232823 --driver=docker  --container-runtime=containerd: (30.042684423s)
--- PASS: TestErrorSpam/setup (30.04s)

                                                
                                    
x
+
TestErrorSpam/start (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
x
+
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
x
+
TestErrorSpam/pause (2.2s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 pause
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 pause: (1.182580165s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 pause
--- PASS: TestErrorSpam/pause (2.20s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
x
+
TestErrorSpam/stop (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 stop: (1.280011947s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-232823 --log_dir /tmp/nospam-232823 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19734-2538756/.minikube/files/etc/test/nested/copy/2544157/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (58.23s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262469 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-262469 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (58.229620352s)
--- PASS: TestFunctional/serial/StartWithProxy (58.23s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (5.86s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0930 10:36:07.154542 2544157 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262469 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-262469 --alsologtostderr -v=8: (5.851425814s)
functional_test.go:663: soft start took 5.857734116s for "functional-262469" cluster.
I0930 10:36:13.006327 2544157 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (5.86s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-262469 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-262469 cache add registry.k8s.io/pause:3.1: (1.455057602s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-262469 cache add registry.k8s.io/pause:3.3: (1.359926301s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-262469 cache add registry.k8s.io/pause:latest: (1.050035574s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-262469 /tmp/TestFunctionalserialCacheCmdcacheadd_local375218759/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 cache add minikube-local-cache-test:functional-262469
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 cache delete minikube-local-cache-test:functional-262469
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-262469
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262469 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (306.808478ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-262469 cache reload: (1.04917837s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)
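The cache_reload sequence is: pre-load an image into minikube's local cache, remove it from the node's runtime, then ask minikube to push the cached copy back. Replayed by hand with the same image:

  minikube -p functional-262469 cache add registry.k8s.io/pause:latest
  minikube -p functional-262469 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-262469 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
  minikube -p functional-262469 cache reload
  minikube -p functional-262469 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again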

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 kubectl -- --context functional-262469 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-262469 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (39.51s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262469 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-262469 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.511754529s)
functional_test.go:761: restart took 39.511848206s for "functional-262469" cluster.
I0930 10:37:00.487964 2544157 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (39.51s)
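The restart above threads a component flag through to kubeadm. One way to confirm it landed (a sketch; component=kube-apiserver is the usual kubeadm label on the static pod) is to read the apiserver pod spec after the restart:

  minikube start -p functional-262469 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  kubectl --context functional-262469 -n kube-system get pod \
    -l component=kube-apiserver -o yaml | grep enable-admission-plugins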

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-262469 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-262469 logs: (1.739308768s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 logs --file /tmp/TestFunctionalserialLogsFileCmd2806429504/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-262469 logs --file /tmp/TestFunctionalserialLogsFileCmd2806429504/001/logs.txt: (1.757338155s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.55s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-262469 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-262469
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-262469: exit status 115 (665.186307ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32521 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-262469 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.55s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262469 config get cpus: exit status 14 (82.405142ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262469 config get cpus: exit status 14 (77.429689ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
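The round trip above can be repeated by hand against the same profile. A minimal sketch, assuming the functional-262469 profile from this run still exists; `config get` on an unset key exits with status 14 and the "specified key could not be found in config" error, exactly as captured in the stderr blocks above:

    out/minikube-linux-arm64 -p functional-262469 config unset cpus
    out/minikube-linux-arm64 -p functional-262469 config get cpus     # exits 14: key not set
    out/minikube-linux-arm64 -p functional-262469 config set cpus 2
    out/minikube-linux-arm64 -p functional-262469 config get cpus     # prints the stored value
    out/minikube-linux-arm64 -p functional-262469 config unset cpus
    out/minikube-linux-arm64 -p functional-262469 config get cpus     # exits 14 again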

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-262469 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-262469 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2580654: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.82s)

                                                
                                    
TestFunctional/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-262469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (173.262247ms)

                                                
                                                
-- stdout --
	* [functional-262469] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 10:37:41.167778 2580318 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:37:41.167962 2580318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:37:41.168017 2580318 out.go:358] Setting ErrFile to fd 2...
	I0930 10:37:41.168040 2580318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:37:41.168294 2580318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	I0930 10:37:41.169111 2580318 out.go:352] Setting JSON to false
	I0930 10:37:41.170299 2580318 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":152410,"bootTime":1727540252,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0930 10:37:41.170410 2580318 start.go:139] virtualization:  
	I0930 10:37:41.176048 2580318 out.go:177] * [functional-262469] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 10:37:41.178157 2580318 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:37:41.178227 2580318 notify.go:220] Checking for updates...
	I0930 10:37:41.182901 2580318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:37:41.184971 2580318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 10:37:41.186543 2580318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	I0930 10:37:41.188496 2580318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 10:37:41.190158 2580318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:37:41.192253 2580318 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 10:37:41.192781 2580318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:37:41.227394 2580318 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:37:41.227757 2580318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:37:41.280472 2580318 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:37:41.27059967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:37:41.280585 2580318 docker.go:318] overlay module found
	I0930 10:37:41.283412 2580318 out.go:177] * Using the docker driver based on existing profile
	I0930 10:37:41.284977 2580318 start.go:297] selected driver: docker
	I0930 10:37:41.285001 2580318 start.go:901] validating driver "docker" against &{Name:functional-262469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-262469 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:37:41.285117 2580318 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:37:41.287375 2580318 out.go:201] 
	W0930 10:37:41.289296 2580318 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0930 10:37:41.290967 2580318 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262469 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.40s)
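Both dry-run invocations above validate the request without touching the running cluster. A minimal sketch of the same check, assuming the same binary and profile; the 250MB request fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because it is below the 1800MB usable minimum, while the call with no memory override passes:

    # fails: requested memory is below the usable minimum
    out/minikube-linux-arm64 start -p functional-262469 --dry-run --memory 250MB \
        --driver=docker --container-runtime=containerd
    # passes: same dry-run without the memory override
    out/minikube-linux-arm64 start -p functional-262469 --dry-run \
        --driver=docker --container-runtime=containerd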

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-262469 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (207.333645ms)

                                                
                                                
-- stdout --
	* [functional-262469] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 10:37:40.966855 2580273 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:37:40.967042 2580273 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:37:40.967065 2580273 out.go:358] Setting ErrFile to fd 2...
	I0930 10:37:40.967093 2580273 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:37:40.968032 2580273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	I0930 10:37:40.968476 2580273 out.go:352] Setting JSON to false
	I0930 10:37:40.969533 2580273 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":152409,"bootTime":1727540252,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0930 10:37:40.969638 2580273 start.go:139] virtualization:  
	I0930 10:37:40.972158 2580273 out.go:177] * [functional-262469] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0930 10:37:40.974366 2580273 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:37:40.974405 2580273 notify.go:220] Checking for updates...
	I0930 10:37:40.976319 2580273 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:37:40.978076 2580273 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 10:37:40.979746 2580273 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	I0930 10:37:40.981675 2580273 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 10:37:40.983734 2580273 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:37:40.986299 2580273 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 10:37:40.986935 2580273 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:37:41.024135 2580273 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:37:41.024262 2580273 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:37:41.104122 2580273 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:37:41.072754313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:37:41.104238 2580273 docker.go:318] overlay module found
	I0930 10:37:41.106534 2580273 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0930 10:37:41.109195 2580273 start.go:297] selected driver: docker
	I0930 10:37:41.109215 2580273 start.go:901] validating driver "docker" against &{Name:functional-262469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-262469 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:37:41.109329 2580273 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:37:41.112384 2580273 out.go:201] 
	W0930 10:37:41.114833 2580273 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0930 10:37:41.116802 2580273 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-262469 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-262469 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-5k77v" [816f31b3-d1a9-41b0-9b65-8e6a8d3dec1e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-5k77v" [816f31b3-d1a9-41b0-9b65-8e6a8d3dec1e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003188948s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31044
functional_test.go:1675: http://192.168.49.2:31044: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-5k77v

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31044
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.67s)
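The flow above is a plain deployment-plus-NodePort round trip. A minimal sketch of the same steps, assuming the same context; the `kubectl wait` line stands in for the harness's own pod polling, and `curl` replaces the in-test HTTP check against the echoserver:

    kubectl --context functional-262469 create deployment hello-node-connect \
        --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-262469 expose deployment hello-node-connect \
        --type=NodePort --port=8080
    kubectl --context functional-262469 wait pod -l app=hello-node-connect \
        --for=condition=Ready --timeout=120s
    URL=$(out/minikube-linux-arm64 -p functional-262469 service hello-node-connect --url)
    curl -s "$URL"    # echoes Hostname, request headers and body, as in the output above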

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d40d19dc-e09f-40f7-ad9e-2a6883909193] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004044749s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-262469 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-262469 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-262469 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-262469 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [101e3e82-e94b-4b86-8f96-e80a887e43da] Pending
helpers_test.go:344: "sp-pod" [101e3e82-e94b-4b86-8f96-e80a887e43da] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [101e3e82-e94b-4b86-8f96-e80a887e43da] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.002866254s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-262469 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-262469 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-262469 delete -f testdata/storage-provisioner/pod.yaml: (1.646521831s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-262469 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cd583293-454a-4045-8c72-788f358cbe37] Pending
helpers_test.go:344: "sp-pod" [cd583293-454a-4045-8c72-788f358cbe37] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cd583293-454a-4045-8c72-788f358cbe37] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004210316s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-262469 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.66s)
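The persistence check above reduces to the following sequence (the harness additionally waits for sp-pod to reach Running between steps). A minimal sketch, assuming the same context and the testdata manifests from the minikube repository; the file written before the pod is deleted must still be visible after the pod is recreated, because it lives on the PVC-backed volume:

    kubectl --context functional-262469 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-262469 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-262469 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-262469 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-262469 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-262469 exec sp-pod -- ls /tmp/mount    # foo must still be there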

                                                
                                    
TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh -n functional-262469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 cp functional-262469:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3029220069/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh -n functional-262469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh -n functional-262469 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.43s)

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2544157/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "sudo cat /etc/test/nested/copy/2544157/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
TestFunctional/parallel/CertSync (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2544157.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "sudo cat /etc/ssl/certs/2544157.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2544157.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "sudo cat /usr/share/ca-certificates/2544157.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/25441572.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "sudo cat /etc/ssl/certs/25441572.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/25441572.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "sudo cat /usr/share/ca-certificates/25441572.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-262469 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262469 ssh "sudo systemctl is-active docker": exit status 1 (275.726553ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262469 ssh "sudo systemctl is-active crio": exit status 1 (260.597706ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                    
TestFunctional/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262469 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262469 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-262469 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2577965: os: process already finished
helpers_test.go:502: unable to terminate pid 2577799: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-262469 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262469 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-262469 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ebe0bd72-8712-4f79-8d2e-848ef533fe29] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ebe0bd72-8712-4f79-8d2e-848ef533fe29] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004176234s
I0930 10:37:18.884367 2544157 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-262469 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.75.210 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-262469 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
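Taken together, the tunnel tests above amount to running a tunnel, deploying a LoadBalancer service, and reading back the ingress IP it is assigned. A minimal sketch, assuming the same context and the testsvc.yaml manifest from the repository; the tunnel is normally kept running in a separate terminal, and the concrete IP (10.106.75.210 in this run) will differ between runs:

    # terminal 1: keep the tunnel process running
    out/minikube-linux-arm64 -p functional-262469 tunnel --alsologtostderr
    # terminal 2: create the LoadBalancer service and read its assigned ingress IP
    kubectl --context functional-262469 apply -f testdata/testsvc.yaml
    kubectl --context functional-262469 get svc nginx-svc \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}'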

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-262469 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-262469 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-zjghv" [5badd270-fba9-4108-97f1-4e2adb9d045c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-zjghv" [5badd270-fba9-4108-97f1-4e2adb9d045c] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004135163s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "383.507389ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "90.272307ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "393.329165ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "63.8381ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 service list -o json
functional_test.go:1494: Took "599.093457ms" to run "out/minikube-linux-arm64 -p functional-262469 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262469 /tmp/TestFunctionalparallelMountCmdany-port3833561487/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727692658092703693" to /tmp/TestFunctionalparallelMountCmdany-port3833561487/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727692658092703693" to /tmp/TestFunctionalparallelMountCmdany-port3833561487/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727692658092703693" to /tmp/TestFunctionalparallelMountCmdany-port3833561487/001/test-1727692658092703693
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262469 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (403.22798ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0930 10:37:38.497214 2544157 retry.go:31] will retry after 439.040497ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 30 10:37 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 30 10:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 30 10:37 test-1727692658092703693
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh cat /mount-9p/test-1727692658092703693
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-262469 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [609bb767-e80f-4aa9-a530-8a8e2ff37b0e] Pending
helpers_test.go:344: "busybox-mount" [609bb767-e80f-4aa9-a530-8a8e2ff37b0e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [609bb767-e80f-4aa9-a530-8a8e2ff37b0e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [609bb767-e80f-4aa9-a530-8a8e2ff37b0e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.011719709s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-262469 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262469 /tmp/TestFunctionalparallelMountCmdany-port3833561487/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.33s)
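The mount check above can be reproduced with any host directory. A minimal sketch, assuming the same binary and profile; /tmp/host-dir is only a placeholder for a directory of your choosing, and the background mount process is stopped at the end (the harness does the equivalent by killing the daemonized mount):

    out/minikube-linux-arm64 mount -p functional-262469 /tmp/host-dir:/mount-9p &
    out/minikube-linux-arm64 -p functional-262469 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-262469 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 -p functional-262469 ssh "sudo umount -f /mount-9p"
    kill %1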

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30653
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30653
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262469 /tmp/TestFunctionalparallelMountCmdspecific-port3437710740/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262469 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (506.162488ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0930 10:37:46.930362 2544157 retry.go:31] will retry after 278.335347ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262469 /tmp/TestFunctionalparallelMountCmdspecific-port3437710740/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262469 ssh "sudo umount -f /mount-9p": exit status 1 (355.072641ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-262469 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262469 /tmp/TestFunctionalparallelMountCmdspecific-port3437710740/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)
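
Note: the retry logged at retry.go:31 above ("will retry after 278.335347ms") is a poll-until-deadline pattern around the findmnt check. A minimal stand-alone sketch of that pattern (hypothetical helper, not the test's own code; assumes the minikube binary is on PATH and reuses the profile name from this run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Profile name taken from the log above.
	const profile = "functional-262469"
	deadline := time.Now().Add(30 * time.Second)
	for {
		// The check the test retries: is the 9p mount visible inside the node?
		err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("mount is visible")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up:", err)
			return
		}
		fmt.Println("will retry:", err)
		time.Sleep(300 * time.Millisecond) // roughly the backoff seen in the log
	}
}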

TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup22439417/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup22439417/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup22439417/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-262469 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup22439417/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup22439417/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262469 /tmp/TestFunctionalparallelMountCmdVerifyCleanup22439417/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-262469 version -o=json --components: (1.191133833s)
--- PASS: TestFunctional/parallel/Version/components (1.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262469 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-262469
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-262469
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262469 image ls --format short --alsologtostderr:
I0930 10:37:57.827511 2583205 out.go:345] Setting OutFile to fd 1 ...
I0930 10:37:57.827710 2583205 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:57.827723 2583205 out.go:358] Setting ErrFile to fd 2...
I0930 10:37:57.827729 2583205 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:57.827989 2583205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
I0930 10:37:57.828777 2583205 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0930 10:37:57.828962 2583205 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0930 10:37:57.829586 2583205 cli_runner.go:164] Run: docker container inspect functional-262469 --format={{.State.Status}}
I0930 10:37:57.851477 2583205 ssh_runner.go:195] Run: systemctl --version
I0930 10:37:57.851560 2583205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262469
I0930 10:37:57.871976 2583205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41318 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/functional-262469/id_rsa Username:docker}
I0930 10:37:57.964019 2583205 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262469 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| docker.io/library/minikube-local-cache-test | functional-262469  | sha256:dd1345 | 991B   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-262469  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | latest             | sha256:6e8672 | 67.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262469 image ls --format table --alsologtostderr:
I0930 10:37:58.423937 2583360 out.go:345] Setting OutFile to fd 1 ...
I0930 10:37:58.424106 2583360 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:58.424130 2583360 out.go:358] Setting ErrFile to fd 2...
I0930 10:37:58.424150 2583360 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:58.424408 2583360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
I0930 10:37:58.425131 2583360 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0930 10:37:58.425333 2583360 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0930 10:37:58.425991 2583360 cli_runner.go:164] Run: docker container inspect functional-262469 --format={{.State.Status}}
I0930 10:37:58.447545 2583360 ssh_runner.go:195] Run: systemctl --version
I0930 10:37:58.447648 2583360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262469
I0930 10:37:58.467982 2583360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41318 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/functional-262469/id_rsa Username:docker}
I0930 10:37:58.561167 2583360 ssh_runner.go:195] Run: sudo crictl images --output json
E0930 10:37:59.487268 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:37:59.493771 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:37:59.505262 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:37:59.526726 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:37:59.568103 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:37:59.649565 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:37:59.811017 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:38:00.145468 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:38:00.787552 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262469 image ls --format json --alsologtostderr:
[{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"713
00"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-262469"],"size":"2173567"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2","repoDigests":["docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb"],"repoTags":["docker.io/library/nginx:latest"],"size":"67693717"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:d3f53a9
8c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:dd1345cac92db49062950fafb2a951eda2e1c2b13ded7623ba7571188ea3de81","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-262469"],"size":"991"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fb
fc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262469 image ls --format json --alsologtostderr:
I0930 10:37:58.122136 2583275 out.go:345] Setting OutFile to fd 1 ...
I0930 10:37:58.122431 2583275 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:58.122474 2583275 out.go:358] Setting ErrFile to fd 2...
I0930 10:37:58.122498 2583275 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:58.122768 2583275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
I0930 10:37:58.123435 2583275 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0930 10:37:58.123621 2583275 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0930 10:37:58.124170 2583275 cli_runner.go:164] Run: docker container inspect functional-262469 --format={{.State.Status}}
I0930 10:37:58.159338 2583275 ssh_runner.go:195] Run: systemctl --version
I0930 10:37:58.159411 2583275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262469
I0930 10:37:58.196134 2583275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41318 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/functional-262469/id_rsa Username:docker}
I0930 10:37:58.292451 2583275 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262469 image ls --format yaml --alsologtostderr:
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-262469
size: "2173567"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2
repoDigests:
- docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb
repoTags:
- docker.io/library/nginx:latest
size: "67693717"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:dd1345cac92db49062950fafb2a951eda2e1c2b13ded7623ba7571188ea3de81
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-262469
size: "991"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262469 image ls --format yaml --alsologtostderr:
I0930 10:37:57.834493 2583204 out.go:345] Setting OutFile to fd 1 ...
I0930 10:37:57.834868 2583204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:57.834883 2583204 out.go:358] Setting ErrFile to fd 2...
I0930 10:37:57.834889 2583204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:57.835173 2583204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
I0930 10:37:57.835859 2583204 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0930 10:37:57.835992 2583204 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0930 10:37:57.836491 2583204 cli_runner.go:164] Run: docker container inspect functional-262469 --format={{.State.Status}}
I0930 10:37:57.860190 2583204 ssh_runner.go:195] Run: systemctl --version
I0930 10:37:57.860244 2583204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262469
I0930 10:37:57.888160 2583204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41318 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/functional-262469/id_rsa Username:docker}
I0930 10:37:57.983329 2583204 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262469 ssh pgrep buildkitd: exit status 1 (337.01154ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image build -t localhost/my-image:functional-262469 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-262469 image build -t localhost/my-image:functional-262469 testdata/build --alsologtostderr: (3.347997501s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262469 image build -t localhost/my-image:functional-262469 testdata/build --alsologtostderr:
I0930 10:37:58.424601 2583365 out.go:345] Setting OutFile to fd 1 ...
I0930 10:37:58.425504 2583365 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:58.425520 2583365 out.go:358] Setting ErrFile to fd 2...
I0930 10:37:58.425528 2583365 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:58.425805 2583365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
I0930 10:37:58.426640 2583365 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0930 10:37:58.428499 2583365 config.go:182] Loaded profile config "functional-262469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0930 10:37:58.429039 2583365 cli_runner.go:164] Run: docker container inspect functional-262469 --format={{.State.Status}}
I0930 10:37:58.450122 2583365 ssh_runner.go:195] Run: systemctl --version
I0930 10:37:58.450176 2583365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262469
I0930 10:37:58.470530 2583365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41318 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/functional-262469/id_rsa Username:docker}
I0930 10:37:58.562025 2583365 build_images.go:161] Building image from path: /tmp/build.1950329750.tar
I0930 10:37:58.562094 2583365 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0930 10:37:58.583235 2583365 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1950329750.tar
I0930 10:37:58.590299 2583365 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1950329750.tar: stat -c "%s %y" /var/lib/minikube/build/build.1950329750.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1950329750.tar': No such file or directory
I0930 10:37:58.590331 2583365 ssh_runner.go:362] scp /tmp/build.1950329750.tar --> /var/lib/minikube/build/build.1950329750.tar (3072 bytes)
I0930 10:37:58.637042 2583365 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1950329750
I0930 10:37:58.646728 2583365 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1950329750 -xf /var/lib/minikube/build/build.1950329750.tar
I0930 10:37:58.656486 2583365 containerd.go:394] Building image: /var/lib/minikube/build/build.1950329750
I0930 10:37:58.656572 2583365 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1950329750 --local dockerfile=/var/lib/minikube/build/build.1950329750 --output type=image,name=localhost/my-image:functional-262469
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.2s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.3s done
#5 DONE 0.3s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 DONE 0.4s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:5b40e6550936e8b09500bd40a4b308d59dd9740e8d6c979541c4c988d6a37d2f
#8 exporting manifest sha256:5b40e6550936e8b09500bd40a4b308d59dd9740e8d6c979541c4c988d6a37d2f 0.0s done
#8 exporting config sha256:360c0fdb6eee2fcb9aab81b76c564c646ce6c68e82b6e85b8024ba7dce9580bf 0.0s done
#8 naming to localhost/my-image:functional-262469 done
#8 DONE 0.1s
I0930 10:38:01.676770 2583365 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1950329750 --local dockerfile=/var/lib/minikube/build/build.1950329750 --output type=image,name=localhost/my-image:functional-262469: (3.020168185s)
I0930 10:38:01.676848 2583365 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1950329750
I0930 10:38:01.686441 2583365 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1950329750.tar
I0930 10:38:01.695799 2583365 build_images.go:217] Built localhost/my-image:functional-262469 from /tmp/build.1950329750.tar
I0930 10:38:01.695831 2583365 build_images.go:133] succeeded building to: functional-262469
I0930 10:38:01.695836 2583365 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)
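
Note: the ImageBuild flow above first probes for a running buildkitd over ssh (the non-zero pgrep exit is expected when the daemon is not up yet) and then builds testdata/build into a locally tagged image. A minimal stand-alone sketch of those two steps (hypothetical helper, not the test's own code; assumes the minikube binary is on PATH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Profile name taken from the log above.
	const profile = "functional-262469"

	// Step 1: pgrep exits non-zero when buildkitd is not yet running inside the
	// node; the build command starts it on demand, so this is informational only.
	if err := exec.Command("minikube", "-p", profile, "ssh", "pgrep buildkitd").Run(); err != nil {
		fmt.Println("buildkitd not running yet:", err)
	}

	// Step 2: build the testdata/build directory into a locally tagged image.
	out, err := exec.Command("minikube", "-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "testdata/build").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("build failed:", err)
	}
}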

TestFunctional/parallel/ImageCommands/Setup (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-262469
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.62s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image load --daemon kicbase/echo-server:functional-262469 --alsologtostderr
2024/09/30 10:37:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image load --daemon kicbase/echo-server:functional-262469 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-262469 image load --daemon kicbase/echo-server:functional-262469 --alsologtostderr: (1.126372238s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-262469
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image load --daemon kicbase/echo-server:functional-262469 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image save kicbase/echo-server:functional-262469 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image rm kicbase/echo-server:functional-262469 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-262469
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-262469 image save --daemon kicbase/echo-server:functional-262469 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-262469
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-262469
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-262469
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-262469
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (133.72s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-163618 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0930 10:38:09.752881 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:38:19.995893 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:38:40.478017 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:21.439932 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-163618 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m12.854252429s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (133.72s)

TestMultiControlPlane/serial/DeployApp (33.19s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- rollout status deployment/busybox
E0930 10:40:43.362354 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-163618 -- rollout status deployment/busybox: (30.141713775s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-fkghh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-lxt9v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-qrs95 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-fkghh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-lxt9v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-qrs95 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-fkghh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-lxt9v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-qrs95 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (33.19s)

TestMultiControlPlane/serial/PingHostFromPods (1.57s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-fkghh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-fkghh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-lxt9v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-lxt9v -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-qrs95 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-163618 -- exec busybox-7dff88458-qrs95 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.57s)

TestMultiControlPlane/serial/AddWorkerNode (21.67s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-163618 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-163618 -v=7 --alsologtostderr: (20.715678588s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.67s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-163618 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.038006222s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

TestMultiControlPlane/serial/CopyFile (18.52s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp testdata/cp-test.txt ha-163618:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1469647871/001/cp-test_ha-163618.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618:/home/docker/cp-test.txt ha-163618-m02:/home/docker/cp-test_ha-163618_ha-163618-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m02 "sudo cat /home/docker/cp-test_ha-163618_ha-163618-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618:/home/docker/cp-test.txt ha-163618-m03:/home/docker/cp-test_ha-163618_ha-163618-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m03 "sudo cat /home/docker/cp-test_ha-163618_ha-163618-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618:/home/docker/cp-test.txt ha-163618-m04:/home/docker/cp-test_ha-163618_ha-163618-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m04 "sudo cat /home/docker/cp-test_ha-163618_ha-163618-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp testdata/cp-test.txt ha-163618-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1469647871/001/cp-test_ha-163618-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m02:/home/docker/cp-test.txt ha-163618:/home/docker/cp-test_ha-163618-m02_ha-163618.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618 "sudo cat /home/docker/cp-test_ha-163618-m02_ha-163618.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m02:/home/docker/cp-test.txt ha-163618-m03:/home/docker/cp-test_ha-163618-m02_ha-163618-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m03 "sudo cat /home/docker/cp-test_ha-163618-m02_ha-163618-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m02:/home/docker/cp-test.txt ha-163618-m04:/home/docker/cp-test_ha-163618-m02_ha-163618-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m04 "sudo cat /home/docker/cp-test_ha-163618-m02_ha-163618-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp testdata/cp-test.txt ha-163618-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1469647871/001/cp-test_ha-163618-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m03:/home/docker/cp-test.txt ha-163618:/home/docker/cp-test_ha-163618-m03_ha-163618.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618 "sudo cat /home/docker/cp-test_ha-163618-m03_ha-163618.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m03:/home/docker/cp-test.txt ha-163618-m02:/home/docker/cp-test_ha-163618-m03_ha-163618-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m02 "sudo cat /home/docker/cp-test_ha-163618-m03_ha-163618-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m03:/home/docker/cp-test.txt ha-163618-m04:/home/docker/cp-test_ha-163618-m03_ha-163618-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m04 "sudo cat /home/docker/cp-test_ha-163618-m03_ha-163618-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp testdata/cp-test.txt ha-163618-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1469647871/001/cp-test_ha-163618-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m04:/home/docker/cp-test.txt ha-163618:/home/docker/cp-test_ha-163618-m04_ha-163618.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618 "sudo cat /home/docker/cp-test_ha-163618-m04_ha-163618.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m04:/home/docker/cp-test.txt ha-163618-m02:/home/docker/cp-test_ha-163618-m04_ha-163618-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m02 "sudo cat /home/docker/cp-test_ha-163618-m04_ha-163618-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 cp ha-163618-m04:/home/docker/cp-test.txt ha-163618-m03:/home/docker/cp-test_ha-163618-m04_ha-163618-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 ssh -n ha-163618-m03 "sudo cat /home/docker/cp-test_ha-163618-m04_ha-163618-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.52s)

TestMultiControlPlane/serial/StopSecondaryNode (12.82s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-163618 node stop m02 -v=7 --alsologtostderr: (12.063785485s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr: exit status 7 (752.562665ms)

-- stdout --
	ha-163618
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163618-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-163618-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163618-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 10:41:46.621959 2599691 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:41:46.622136 2599691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:41:46.622175 2599691 out.go:358] Setting ErrFile to fd 2...
	I0930 10:41:46.622183 2599691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:41:46.622571 2599691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	I0930 10:41:46.622861 2599691 out.go:352] Setting JSON to false
	I0930 10:41:46.622922 2599691 mustload.go:65] Loading cluster: ha-163618
	I0930 10:41:46.623140 2599691 notify.go:220] Checking for updates...
	I0930 10:41:46.623419 2599691 config.go:182] Loaded profile config "ha-163618": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 10:41:46.623443 2599691 status.go:174] checking status of ha-163618 ...
	I0930 10:41:46.624474 2599691 cli_runner.go:164] Run: docker container inspect ha-163618 --format={{.State.Status}}
	I0930 10:41:46.649821 2599691 status.go:364] ha-163618 host status = "Running" (err=<nil>)
	I0930 10:41:46.649848 2599691 host.go:66] Checking if "ha-163618" exists ...
	I0930 10:41:46.650253 2599691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-163618
	I0930 10:41:46.679046 2599691 host.go:66] Checking if "ha-163618" exists ...
	I0930 10:41:46.679406 2599691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:41:46.679457 2599691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-163618
	I0930 10:41:46.703496 2599691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41323 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/ha-163618/id_rsa Username:docker}
	I0930 10:41:46.793952 2599691 ssh_runner.go:195] Run: systemctl --version
	I0930 10:41:46.798333 2599691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:41:46.810648 2599691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:41:46.870052 2599691 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-30 10:41:46.859458114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:41:46.871319 2599691 kubeconfig.go:125] found "ha-163618" server: "https://192.168.49.254:8443"
	I0930 10:41:46.871357 2599691 api_server.go:166] Checking apiserver status ...
	I0930 10:41:46.871402 2599691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:41:46.884898 2599691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1425/cgroup
	I0930 10:41:46.894638 2599691 api_server.go:182] apiserver freezer: "11:freezer:/docker/6966d0df201ffb6bf631c8d0cf09343483f78007765794d616c7351a97d1d258/kubepods/burstable/pod63c01d10df8973eb3417bcad6bc2a413/5eec60683febea9c19f81df2f5246f59b6bab6682e81a151b1a36571e7e383b8"
	I0930 10:41:46.894712 2599691 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6966d0df201ffb6bf631c8d0cf09343483f78007765794d616c7351a97d1d258/kubepods/burstable/pod63c01d10df8973eb3417bcad6bc2a413/5eec60683febea9c19f81df2f5246f59b6bab6682e81a151b1a36571e7e383b8/freezer.state
	I0930 10:41:46.903317 2599691 api_server.go:204] freezer state: "THAWED"
	I0930 10:41:46.903349 2599691 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0930 10:41:46.911104 2599691 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0930 10:41:46.911133 2599691 status.go:456] ha-163618 apiserver status = Running (err=<nil>)
	I0930 10:41:46.911143 2599691 status.go:176] ha-163618 status: &{Name:ha-163618 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:41:46.911160 2599691 status.go:174] checking status of ha-163618-m02 ...
	I0930 10:41:46.911469 2599691 cli_runner.go:164] Run: docker container inspect ha-163618-m02 --format={{.State.Status}}
	I0930 10:41:46.927870 2599691 status.go:364] ha-163618-m02 host status = "Stopped" (err=<nil>)
	I0930 10:41:46.927894 2599691 status.go:377] host is not running, skipping remaining checks
	I0930 10:41:46.927900 2599691 status.go:176] ha-163618-m02 status: &{Name:ha-163618-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:41:46.927921 2599691 status.go:174] checking status of ha-163618-m03 ...
	I0930 10:41:46.928241 2599691 cli_runner.go:164] Run: docker container inspect ha-163618-m03 --format={{.State.Status}}
	I0930 10:41:46.944566 2599691 status.go:364] ha-163618-m03 host status = "Running" (err=<nil>)
	I0930 10:41:46.944590 2599691 host.go:66] Checking if "ha-163618-m03" exists ...
	I0930 10:41:46.944909 2599691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-163618-m03
	I0930 10:41:46.961129 2599691 host.go:66] Checking if "ha-163618-m03" exists ...
	I0930 10:41:46.961445 2599691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:41:46.961498 2599691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-163618-m03
	I0930 10:41:46.978747 2599691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41333 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/ha-163618-m03/id_rsa Username:docker}
	I0930 10:41:47.077048 2599691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:41:47.095727 2599691 kubeconfig.go:125] found "ha-163618" server: "https://192.168.49.254:8443"
	I0930 10:41:47.095759 2599691 api_server.go:166] Checking apiserver status ...
	I0930 10:41:47.095800 2599691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:41:47.106583 2599691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1295/cgroup
	I0930 10:41:47.117128 2599691 api_server.go:182] apiserver freezer: "11:freezer:/docker/81c460e7a8790adc9690e872d5ab8a4ea97fff190ad4322727217d591ffcf594/kubepods/burstable/pod4878ae967425db6db9107efd4256de0a/292907259f5855a2f316c91e1537fe50b0ad685e535af0d0e02ef402efd8f291"
	I0930 10:41:47.117198 2599691 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/81c460e7a8790adc9690e872d5ab8a4ea97fff190ad4322727217d591ffcf594/kubepods/burstable/pod4878ae967425db6db9107efd4256de0a/292907259f5855a2f316c91e1537fe50b0ad685e535af0d0e02ef402efd8f291/freezer.state
	I0930 10:41:47.127068 2599691 api_server.go:204] freezer state: "THAWED"
	I0930 10:41:47.127111 2599691 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0930 10:41:47.136566 2599691 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0930 10:41:47.136659 2599691 status.go:456] ha-163618-m03 apiserver status = Running (err=<nil>)
	I0930 10:41:47.136683 2599691 status.go:176] ha-163618-m03 status: &{Name:ha-163618-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:41:47.136731 2599691 status.go:174] checking status of ha-163618-m04 ...
	I0930 10:41:47.137059 2599691 cli_runner.go:164] Run: docker container inspect ha-163618-m04 --format={{.State.Status}}
	I0930 10:41:47.153236 2599691 status.go:364] ha-163618-m04 host status = "Running" (err=<nil>)
	I0930 10:41:47.153260 2599691 host.go:66] Checking if "ha-163618-m04" exists ...
	I0930 10:41:47.153581 2599691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-163618-m04
	I0930 10:41:47.172585 2599691 host.go:66] Checking if "ha-163618-m04" exists ...
	I0930 10:41:47.172923 2599691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:41:47.172968 2599691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-163618-m04
	I0930 10:41:47.197219 2599691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41338 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/ha-163618-m04/id_rsa Username:docker}
	I0930 10:41:47.288783 2599691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:41:47.300119 2599691 status.go:176] ha-163618-m04 status: &{Name:ha-163618-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.82s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.18s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-163618 node start m02 -v=7 --alsologtostderr: (17.655081422s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr: (1.369904085s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.18s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.224528705s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.49s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-163618 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-163618 -v=7 --alsologtostderr
E0930 10:42:10.412530 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:10.418978 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:10.430414 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:10.452129 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:10.493554 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:10.575088 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:10.736664 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:11.058088 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:11.700908 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:12.982349 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:15.545031 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:20.666959 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:30.908798 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-163618 -v=7 --alsologtostderr: (37.198928093s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-163618 --wait=true -v=7 --alsologtostderr
E0930 10:42:51.390974 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:59.487021 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:27.203791 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:32.352827 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-163618 --wait=true -v=7 --alsologtostderr: (1m43.138761772s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-163618
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.49s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.86s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-163618 node delete m03 -v=7 --alsologtostderr: (9.926979776s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.86s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (36.1s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 stop -v=7 --alsologtostderr
E0930 10:44:54.277986 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-163618 stop -v=7 --alsologtostderr: (35.992410639s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr: exit status 7 (110.930024ms)

-- stdout --
	ha-163618
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-163618-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-163618-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 10:45:16.646672 2614116 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:45:16.646846 2614116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:45:16.646876 2614116 out.go:358] Setting ErrFile to fd 2...
	I0930 10:45:16.646899 2614116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:45:16.647183 2614116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	I0930 10:45:16.647409 2614116 out.go:352] Setting JSON to false
	I0930 10:45:16.647476 2614116 mustload.go:65] Loading cluster: ha-163618
	I0930 10:45:16.647545 2614116 notify.go:220] Checking for updates...
	I0930 10:45:16.647991 2614116 config.go:182] Loaded profile config "ha-163618": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 10:45:16.648033 2614116 status.go:174] checking status of ha-163618 ...
	I0930 10:45:16.648651 2614116 cli_runner.go:164] Run: docker container inspect ha-163618 --format={{.State.Status}}
	I0930 10:45:16.667454 2614116 status.go:364] ha-163618 host status = "Stopped" (err=<nil>)
	I0930 10:45:16.667474 2614116 status.go:377] host is not running, skipping remaining checks
	I0930 10:45:16.667481 2614116 status.go:176] ha-163618 status: &{Name:ha-163618 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:45:16.667515 2614116 status.go:174] checking status of ha-163618-m02 ...
	I0930 10:45:16.667853 2614116 cli_runner.go:164] Run: docker container inspect ha-163618-m02 --format={{.State.Status}}
	I0930 10:45:16.691763 2614116 status.go:364] ha-163618-m02 host status = "Stopped" (err=<nil>)
	I0930 10:45:16.691784 2614116 status.go:377] host is not running, skipping remaining checks
	I0930 10:45:16.691791 2614116 status.go:176] ha-163618-m02 status: &{Name:ha-163618-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:45:16.691810 2614116 status.go:174] checking status of ha-163618-m04 ...
	I0930 10:45:16.692123 2614116 cli_runner.go:164] Run: docker container inspect ha-163618-m04 --format={{.State.Status}}
	I0930 10:45:16.709828 2614116 status.go:364] ha-163618-m04 host status = "Stopped" (err=<nil>)
	I0930 10:45:16.709855 2614116 status.go:377] host is not running, skipping remaining checks
	I0930 10:45:16.709863 2614116 status.go:176] ha-163618-m04 status: &{Name:ha-163618-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.10s)

TestMultiControlPlane/serial/RestartCluster (55.64s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-163618 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-163618 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (54.701628039s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.64s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

TestMultiControlPlane/serial/AddSecondaryNode (43.44s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-163618 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-163618 --control-plane -v=7 --alsologtostderr: (42.416621452s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-163618 status -v=7 --alsologtostderr: (1.026380995s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.44s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

TestJSONOutput/start/Command (47.4s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-991684 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0930 10:47:10.412470 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:47:38.122753 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-991684 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (47.392403889s)
--- PASS: TestJSONOutput/start/Command (47.40s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-991684 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-991684 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-991684 --output=json --user=testUser
E0930 10:47:59.486612 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-991684 --output=json --user=testUser: (5.779439312s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-518334 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-518334 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.55014ms)

-- stdout --
	{"specversion":"1.0","id":"414bf23a-cc36-4fb3-97d2-07af672596e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-518334] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c331928-5b7d-4b6d-8479-2d888a2314b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"12830c6a-bb8d-4bb7-b9ad-54f11e32223f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8c42eafb-7d74-4efe-8c4a-e24d3340bfce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig"}}
	{"specversion":"1.0","id":"f91afa9d-b601-4f09-a892-3f3652c75e97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube"}}
	{"specversion":"1.0","id":"8ea6b447-f6a7-4c2c-90e9-a4e109dd1540","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e0c4bbe5-0112-4830-bdd1-e9dbf4a41a12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f5481ba6-c722-41b6-a8d3-12b87aa9433d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-518334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-518334
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (37.07s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-486249 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-486249 --network=: (34.980011975s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-486249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-486249
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-486249: (2.063946625s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.07s)

TestKicCustomNetwork/use_default_bridge_network (33.45s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-474087 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-474087 --network=bridge: (31.445137319s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-474087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-474087
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-474087: (1.987486987s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.45s)

TestKicExistingNetwork (30.37s)

=== RUN   TestKicExistingNetwork
I0930 10:49:14.731160 2544157 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0930 10:49:14.746345 2544157 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0930 10:49:14.746432 2544157 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0930 10:49:14.746450 2544157 cli_runner.go:164] Run: docker network inspect existing-network
W0930 10:49:14.760245 2544157 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0930 10:49:14.760278 2544157 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0930 10:49:14.760301 2544157 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0930 10:49:14.760408 2544157 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0930 10:49:14.775152 2544157 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-97e64bb4d894 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fb:68:10:57} reservation:<nil>}
I0930 10:49:14.775995 2544157 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a87370}
I0930 10:49:14.776027 2544157 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0930 10:49:14.776079 2544157 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0930 10:49:14.843903 2544157 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-926649 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-926649 --network=existing-network: (28.174809481s)
helpers_test.go:175: Cleaning up "existing-network-926649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-926649
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-926649: (2.00309971s)
I0930 10:49:45.038445 2544157 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.37s)

TestKicCustomSubnet (35.98s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-485951 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-485951 --subnet=192.168.60.0/24: (33.842564993s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-485951 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-485951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-485951
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-485951: (2.117785028s)
--- PASS: TestKicCustomSubnet (35.98s)

TestKicStaticIP (33.02s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-280623 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-280623 --static-ip=192.168.200.200: (30.804433449s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-280623 ip
helpers_test.go:175: Cleaning up "static-ip-280623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-280623
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-280623: (2.076419796s)
--- PASS: TestKicStaticIP (33.02s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (70.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-301873 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-301873 --driver=docker  --container-runtime=containerd: (30.248599839s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-304556 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-304556 --driver=docker  --container-runtime=containerd: (33.867112823s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-301873
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-304556
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-304556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-304556
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-304556: (2.453142435s)
helpers_test.go:175: Cleaning up "first-301873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-301873
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-301873: (1.971153141s)
--- PASS: TestMinikubeProfile (70.02s)

TestMountStart/serial/StartWithMountFirst (6.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-669778 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-669778 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.116877946s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.12s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-669778 ssh -- ls /minikube-host
E0930 10:52:10.411932 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (8.74s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-671599 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-671599 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.734093608s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.74s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-671599 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-669778 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-669778 --alsologtostderr -v=5: (1.619791695s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-671599 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-671599
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-671599: (1.19018074s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (7.30s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-671599
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-671599: (6.294444602s)
--- PASS: TestMountStart/serial/RestartStopped (7.30s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-671599 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (72.70s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-597306 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0930 10:52:59.487375 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-597306 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m12.175723458s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (72.70s)

TestMultiNode/serial/DeployApp2Nodes (19.73s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-597306 -- rollout status deployment/busybox: (17.807360581s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- exec busybox-7dff88458-2q77j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- exec busybox-7dff88458-spq6t -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- exec busybox-7dff88458-2q77j -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- exec busybox-7dff88458-spq6t -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- exec busybox-7dff88458-2q77j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- exec busybox-7dff88458-spq6t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (19.73s)

TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- exec busybox-7dff88458-2q77j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- exec busybox-7dff88458-2q77j -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- exec busybox-7dff88458-spq6t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-597306 -- exec busybox-7dff88458-spq6t -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (17.58s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-597306 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-597306 -v 3 --alsologtostderr: (16.94559923s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 status --alsologtostderr
E0930 10:54:22.565749 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/AddNode (17.58s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-597306 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.85s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp testdata/cp-test.txt multinode-597306:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp multinode-597306:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile141727357/001/cp-test_multinode-597306.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp multinode-597306:/home/docker/cp-test.txt multinode-597306-m02:/home/docker/cp-test_multinode-597306_multinode-597306-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m02 "sudo cat /home/docker/cp-test_multinode-597306_multinode-597306-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp multinode-597306:/home/docker/cp-test.txt multinode-597306-m03:/home/docker/cp-test_multinode-597306_multinode-597306-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m03 "sudo cat /home/docker/cp-test_multinode-597306_multinode-597306-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp testdata/cp-test.txt multinode-597306-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp multinode-597306-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile141727357/001/cp-test_multinode-597306-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp multinode-597306-m02:/home/docker/cp-test.txt multinode-597306:/home/docker/cp-test_multinode-597306-m02_multinode-597306.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306 "sudo cat /home/docker/cp-test_multinode-597306-m02_multinode-597306.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp multinode-597306-m02:/home/docker/cp-test.txt multinode-597306-m03:/home/docker/cp-test_multinode-597306-m02_multinode-597306-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m03 "sudo cat /home/docker/cp-test_multinode-597306-m02_multinode-597306-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp testdata/cp-test.txt multinode-597306-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp multinode-597306-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile141727357/001/cp-test_multinode-597306-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp multinode-597306-m03:/home/docker/cp-test.txt multinode-597306:/home/docker/cp-test_multinode-597306-m03_multinode-597306.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306 "sudo cat /home/docker/cp-test_multinode-597306-m03_multinode-597306.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 cp multinode-597306-m03:/home/docker/cp-test.txt multinode-597306-m02:/home/docker/cp-test_multinode-597306-m03_multinode-597306-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 ssh -n multinode-597306-m02 "sudo cat /home/docker/cp-test_multinode-597306-m03_multinode-597306-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.85s)

TestMultiNode/serial/StopNode (2.30s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-597306 node stop m03: (1.238369981s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-597306 status: exit status 7 (537.961846ms)

-- stdout --
	multinode-597306
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-597306-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-597306-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-597306 status --alsologtostderr: exit status 7 (523.633855ms)

-- stdout --
	multinode-597306
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-597306-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-597306-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0930 10:54:35.390413 2667436 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:54:35.390619 2667436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:54:35.390648 2667436 out.go:358] Setting ErrFile to fd 2...
	I0930 10:54:35.390668 2667436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:54:35.390940 2667436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	I0930 10:54:35.391166 2667436 out.go:352] Setting JSON to false
	I0930 10:54:35.391237 2667436 mustload.go:65] Loading cluster: multinode-597306
	I0930 10:54:35.391319 2667436 notify.go:220] Checking for updates...
	I0930 10:54:35.392741 2667436 config.go:182] Loaded profile config "multinode-597306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 10:54:35.392807 2667436 status.go:174] checking status of multinode-597306 ...
	I0930 10:54:35.393548 2667436 cli_runner.go:164] Run: docker container inspect multinode-597306 --format={{.State.Status}}
	I0930 10:54:35.413093 2667436 status.go:364] multinode-597306 host status = "Running" (err=<nil>)
	I0930 10:54:35.413117 2667436 host.go:66] Checking if "multinode-597306" exists ...
	I0930 10:54:35.413443 2667436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-597306
	I0930 10:54:35.446180 2667436 host.go:66] Checking if "multinode-597306" exists ...
	I0930 10:54:35.446503 2667436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:54:35.446555 2667436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-597306
	I0930 10:54:35.468100 2667436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41443 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/multinode-597306/id_rsa Username:docker}
	I0930 10:54:35.565020 2667436 ssh_runner.go:195] Run: systemctl --version
	I0930 10:54:35.569677 2667436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:54:35.581894 2667436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:54:35.639974 2667436 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-30 10:54:35.628878529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:54:35.640622 2667436 kubeconfig.go:125] found "multinode-597306" server: "https://192.168.67.2:8443"
	I0930 10:54:35.640655 2667436 api_server.go:166] Checking apiserver status ...
	I0930 10:54:35.640703 2667436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:54:35.652252 2667436 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1425/cgroup
	I0930 10:54:35.662301 2667436 api_server.go:182] apiserver freezer: "11:freezer:/docker/4cd4b116287ed18d70189e076b5405e005f19ba3fae63ffa6a9c220c29252bc8/kubepods/burstable/pod34fa80fc6ffaaf29554276e9af70b282/939fa18e9b1d804cfb64793be9fc612303206a2d911c4bd29ffd4c6f5f74ebff"
	I0930 10:54:35.662389 2667436 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4cd4b116287ed18d70189e076b5405e005f19ba3fae63ffa6a9c220c29252bc8/kubepods/burstable/pod34fa80fc6ffaaf29554276e9af70b282/939fa18e9b1d804cfb64793be9fc612303206a2d911c4bd29ffd4c6f5f74ebff/freezer.state
	I0930 10:54:35.671264 2667436 api_server.go:204] freezer state: "THAWED"
	I0930 10:54:35.671301 2667436 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0930 10:54:35.679294 2667436 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0930 10:54:35.679326 2667436 status.go:456] multinode-597306 apiserver status = Running (err=<nil>)
	I0930 10:54:35.679337 2667436 status.go:176] multinode-597306 status: &{Name:multinode-597306 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:54:35.679382 2667436 status.go:174] checking status of multinode-597306-m02 ...
	I0930 10:54:35.679755 2667436 cli_runner.go:164] Run: docker container inspect multinode-597306-m02 --format={{.State.Status}}
	I0930 10:54:35.695671 2667436 status.go:364] multinode-597306-m02 host status = "Running" (err=<nil>)
	I0930 10:54:35.695698 2667436 host.go:66] Checking if "multinode-597306-m02" exists ...
	I0930 10:54:35.696006 2667436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-597306-m02
	I0930 10:54:35.712742 2667436 host.go:66] Checking if "multinode-597306-m02" exists ...
	I0930 10:54:35.713069 2667436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:54:35.713118 2667436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-597306-m02
	I0930 10:54:35.729799 2667436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41448 SSHKeyPath:/home/jenkins/minikube-integration/19734-2538756/.minikube/machines/multinode-597306-m02/id_rsa Username:docker}
	I0930 10:54:35.821295 2667436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:54:35.834921 2667436 status.go:176] multinode-597306-m02 status: &{Name:multinode-597306-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:54:35.835017 2667436 status.go:174] checking status of multinode-597306-m03 ...
	I0930 10:54:35.835380 2667436 cli_runner.go:164] Run: docker container inspect multinode-597306-m03 --format={{.State.Status}}
	I0930 10:54:35.852301 2667436 status.go:364] multinode-597306-m03 host status = "Stopped" (err=<nil>)
	I0930 10:54:35.852326 2667436 status.go:377] host is not running, skipping remaining checks
	I0930 10:54:35.852334 2667436 status.go:176] multinode-597306-m03 status: &{Name:multinode-597306-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)

TestMultiNode/serial/StartAfterStop (10.15s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-597306 node start m03 -v=7 --alsologtostderr: (9.279153809s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.15s)

TestMultiNode/serial/RestartKeepsNodes (81.72s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-597306
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-597306
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-597306: (25.189446303s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-597306 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-597306 --wait=true -v=8 --alsologtostderr: (56.422974938s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-597306
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.72s)

TestMultiNode/serial/DeleteNode (5.53s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-597306 node delete m03: (4.872307725s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.53s)

TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-597306 stop: (23.820497451s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-597306 status: exit status 7 (96.772822ms)

-- stdout --
	multinode-597306
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-597306-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-597306 status --alsologtostderr: exit status 7 (102.109336ms)

-- stdout --
	multinode-597306
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-597306-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0930 10:56:37.228998 2675862 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:56:37.229155 2675862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:56:37.229167 2675862 out.go:358] Setting ErrFile to fd 2...
	I0930 10:56:37.229173 2675862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:56:37.229439 2675862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	I0930 10:56:37.229619 2675862 out.go:352] Setting JSON to false
	I0930 10:56:37.229656 2675862 mustload.go:65] Loading cluster: multinode-597306
	I0930 10:56:37.230106 2675862 notify.go:220] Checking for updates...
	I0930 10:56:37.230791 2675862 config.go:182] Loaded profile config "multinode-597306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0930 10:56:37.230824 2675862 status.go:174] checking status of multinode-597306 ...
	I0930 10:56:37.232398 2675862 cli_runner.go:164] Run: docker container inspect multinode-597306 --format={{.State.Status}}
	I0930 10:56:37.250496 2675862 status.go:364] multinode-597306 host status = "Stopped" (err=<nil>)
	I0930 10:56:37.250518 2675862 status.go:377] host is not running, skipping remaining checks
	I0930 10:56:37.250525 2675862 status.go:176] multinode-597306 status: &{Name:multinode-597306 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:56:37.250571 2675862 status.go:174] checking status of multinode-597306-m02 ...
	I0930 10:56:37.250883 2675862 cli_runner.go:164] Run: docker container inspect multinode-597306-m02 --format={{.State.Status}}
	I0930 10:56:37.275887 2675862 status.go:364] multinode-597306-m02 host status = "Stopped" (err=<nil>)
	I0930 10:56:37.275925 2675862 status.go:377] host is not running, skipping remaining checks
	I0930 10:56:37.275932 2675862 status.go:176] multinode-597306-m02 status: &{Name:multinode-597306-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

TestMultiNode/serial/RestartMultiNode (52.74s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-597306 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0930 10:57:10.412358 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-597306 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.069904733s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-597306 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.74s)

TestMultiNode/serial/ValidateNameConflict (34.40s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-597306
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-597306-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-597306-m02 --driver=docker  --container-runtime=containerd: exit status 14 (128.10137ms)

-- stdout --
	* [multinode-597306-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-597306-m02' is duplicated with machine name 'multinode-597306-m02' in profile 'multinode-597306'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-597306-m03 --driver=docker  --container-runtime=containerd
E0930 10:57:59.486763 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-597306-m03 --driver=docker  --container-runtime=containerd: (31.915874606s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-597306
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-597306: exit status 80 (335.992017ms)

-- stdout --
	* Adding node m03 to cluster multinode-597306 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-597306-m03 already exists in multinode-597306-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-597306-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-597306-m03: (1.936024762s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.40s)

TestPreload (124.23s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-972499 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0930 10:58:33.484134 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-972499 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m26.639379517s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-972499 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-972499 image pull gcr.io/k8s-minikube/busybox: (2.048828285s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-972499
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-972499: (12.0505957s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-972499 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-972499 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.65555438s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-972499 image list
helpers_test.go:175: Cleaning up "test-preload-972499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-972499
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-972499: (2.525315966s)
--- PASS: TestPreload (124.23s)

TestScheduledStopUnix (106.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-122763 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-122763 --memory=2048 --driver=docker  --container-runtime=containerd: (29.805652387s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-122763 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-122763 -n scheduled-stop-122763
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-122763 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0930 11:00:42.870471 2544157 retry.go:31] will retry after 71.049µs: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.871597 2544157 retry.go:31] will retry after 203.995µs: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.872732 2544157 retry.go:31] will retry after 334.932µs: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.873815 2544157 retry.go:31] will retry after 175.105µs: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.874909 2544157 retry.go:31] will retry after 756.055µs: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.876007 2544157 retry.go:31] will retry after 792.75µs: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.877106 2544157 retry.go:31] will retry after 997.868µs: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.878214 2544157 retry.go:31] will retry after 1.72387ms: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.880372 2544157 retry.go:31] will retry after 2.400062ms: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.883534 2544157 retry.go:31] will retry after 2.013322ms: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.885675 2544157 retry.go:31] will retry after 4.05738ms: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.889827 2544157 retry.go:31] will retry after 5.139211ms: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.896010 2544157 retry.go:31] will retry after 11.387533ms: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.908225 2544157 retry.go:31] will retry after 27.687441ms: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.936451 2544157 retry.go:31] will retry after 25.624922ms: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
I0930 11:00:42.963175 2544157 retry.go:31] will retry after 29.204295ms: open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/scheduled-stop-122763/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-122763 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-122763 -n scheduled-stop-122763
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-122763
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-122763 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-122763
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-122763: exit status 7 (67.335313ms)

-- stdout --
	scheduled-stop-122763
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-122763 -n scheduled-stop-122763
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-122763 -n scheduled-stop-122763: exit status 7 (65.808152ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-122763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-122763
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-122763: (4.800322972s)
--- PASS: TestScheduledStopUnix (106.09s)

TestInsufficientStorage (10.67s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-225173 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-225173 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.262909432s)

-- stdout --
	{"specversion":"1.0","id":"f932d986-fdc5-4d53-871c-6a16006945e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-225173] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8eea40a3-bf61-42f4-940b-a16bc087f06f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"5b9dc590-da89-492c-9c77-371718960e46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e8f44f84-2908-4732-a43f-89f49ce7740b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig"}}
	{"specversion":"1.0","id":"7843da3e-2735-41e8-8b9a-936e250179c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube"}}
	{"specversion":"1.0","id":"6a30f37c-2d94-49ae-a65a-670f4f885e4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"95022f2f-36f2-4c37-a609-a559d222aa88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a7d12419-3401-49b7-8c80-bb9c96fdd188","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"654b681d-5133-4976-b167-1aa3a4505870","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1c2b64d0-97da-4b3d-b4c8-cee965575de1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"11353d28-833f-46a8-a7e0-84d1c489bdb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6530499a-bd08-4589-b3e6-d7c550587f60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-225173\" primary control-plane node in \"insufficient-storage-225173\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f617918d-70df-45ce-a971-917f40d19830","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"851f202f-0099-4469-8f21-fdd6dd86b945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"534e991c-d753-4e81-a7d3-d9d4feecbe8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-225173 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-225173 --output=json --layout=cluster: exit status 7 (273.359249ms)

-- stdout --
	{"Name":"insufficient-storage-225173","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-225173","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0930 11:02:07.191101 2694468 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-225173" does not appear in /home/jenkins/minikube-integration/19734-2538756/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-225173 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-225173 --output=json --layout=cluster: exit status 7 (272.906424ms)

-- stdout --
	{"Name":"insufficient-storage-225173","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-225173","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0930 11:02:07.465268 2694532 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-225173" does not appear in /home/jenkins/minikube-integration/19734-2538756/kubeconfig
	E0930 11:02:07.475284 2694532 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/insufficient-storage-225173/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-225173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-225173
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-225173: (1.863210612s)
--- PASS: TestInsufficientStorage (10.67s)
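For reference, the cluster-layout status above is plain JSON on stdout even when the command exits non-zero (exit status 7 here). Below is a minimal Go sketch of decoding it; it assumes a minikube binary on PATH and the profile name from this run, and the struct fields simply mirror the JSON keys printed above rather than minikube's own types.

// Illustrative only: decode the --layout=cluster JSON shown above and react
// to the 507/InsufficientStorage status code. Not minikube's own code.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type clusterStatus struct {
	Name         string `json:"Name"`
	StatusCode   int    `json:"StatusCode"`
	StatusName   string `json:"StatusName"`
	StatusDetail string `json:"StatusDetail"`
}

func main() {
	out, runErr := exec.Command("minikube", "status", "-p", "insufficient-storage-225173",
		"--output=json", "--layout=cluster").Output()
	// A non-zero exit (exit status 7 above) still leaves the JSON on stdout,
	// so decode before giving up on runErr.
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode failed:", err, "run error:", runErr)
		return
	}
	if st.StatusCode == 507 { // InsufficientStorage, as reported above
		fmt.Printf("%s: %s\n", st.Name, st.StatusDetail)
	}
}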

                                                
                                    
TestRunningBinaryUpgrade (89.91s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2495115560 start -p running-upgrade-184826 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2495115560 start -p running-upgrade-184826 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.010869579s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-184826 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-184826 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.243927952s)
helpers_test.go:175: Cleaning up "running-upgrade-184826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-184826
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-184826: (2.891614401s)
--- PASS: TestRunningBinaryUpgrade (89.91s)

                                                
                                    
TestKubernetesUpgrade (102.79s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-160245 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-160245 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m0.24768007s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-160245
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-160245: (1.377356056s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-160245 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-160245 status --format={{.Host}}: exit status 7 (91.328811ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-160245 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-160245 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.411589048s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-160245 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-160245 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-160245 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (126.387254ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-160245] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-160245
	    minikube start -p kubernetes-upgrade-160245 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1602452 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-160245 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-160245 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-160245 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.03622498s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-160245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-160245
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-160245: (2.313029917s)
--- PASS: TestKubernetesUpgrade (102.79s)
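The downgrade refusal above (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) boils down to a version comparison against the existing cluster. A hedged sketch of that kind of guard follows, using the golang.org/x/mod/semver package; it is illustrative only and not minikube's actual implementation.

// Sketch of a downgrade guard: refuse to move an existing cluster to an
// older Kubernetes version. Illustrative, not minikube's code.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func checkVersionChange(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkVersionChange("v1.31.1", "v1.20.0")) // mirrors the exit status 106 case above
	fmt.Println(checkVersionChange("v1.20.0", "v1.31.1")) // the upgrade path that succeeded earlier
}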

                                                
                                    
TestMissingContainerUpgrade (185.22s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1291363115 start -p missing-upgrade-488474 --memory=2200 --driver=docker  --container-runtime=containerd
E0930 11:02:10.412690 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1291363115 start -p missing-upgrade-488474 --memory=2200 --driver=docker  --container-runtime=containerd: (1m36.282945318s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-488474
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-488474: (10.337298326s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-488474
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-488474 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-488474 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m14.536064328s)
helpers_test.go:175: Cleaning up "missing-upgrade-488474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-488474
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-488474: (2.991371894s)
--- PASS: TestMissingContainerUpgrade (185.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-727325 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-727325 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (80.774138ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-727325] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
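The exit status 14 above is a usage check: --kubernetes-version conflicts with --no-kubernetes. A small illustrative sketch of that kind of mutually exclusive flag validation with Go's standard flag package; the flag names match the command line above, while the program structure is made up for illustration.

// Illustrative flag validation in the spirit of the MK_USAGE failure above.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // exit code seen in the run above
	}
	fmt.Println("flags OK")
}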

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-727325 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-727325 --driver=docker  --container-runtime=containerd: (38.493097973s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-727325 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.19s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-727325 --no-kubernetes --driver=docker  --container-runtime=containerd
E0930 11:02:59.487367 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-727325 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.200040553s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-727325 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-727325 status -o json: exit status 2 (357.613695ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-727325","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-727325
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-727325: (1.83837148s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.40s)

                                                
                                    
TestNoKubernetes/serial/Start (7.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-727325 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-727325 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.587355754s)
--- PASS: TestNoKubernetes/serial/Start (7.59s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-727325 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-727325 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.004725ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
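The check above treats a failing systemctl probe as success: after a --no-kubernetes start, kubelet must not be active. A rough sketch of the same assertion from Go, assuming a minikube binary on PATH and the NoKubernetes-727325 profile from this run.

// Sketch: the ssh'd systemctl probe is expected to exit non-zero when
// kubelet is not running. Illustrative, not the test's own code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-727325",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("kubelet inactive as expected, ssh exit code:", exitErr.ExitCode())
		return
	}
	fmt.Println("unexpected: probe succeeded (kubelet active?) or other error:", err)
}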

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-727325
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-727325: (1.216298335s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-727325 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-727325 --driver=docker  --container-runtime=containerd: (6.73306944s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-727325 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-727325 "sudo systemctl is-active --quiet service kubelet": exit status 1 (335.794824ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (157.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2085007222 start -p stopped-upgrade-312020 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2085007222 start -p stopped-upgrade-312020 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (41.054994749s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2085007222 -p stopped-upgrade-312020 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2085007222 -p stopped-upgrade-312020 stop: (20.533222697s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-312020 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-312020 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m35.457327042s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (157.05s)

                                                
                                    
TestPause/serial/Start (89.23s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-231445 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0930 11:07:10.412446 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-231445 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m29.229651824s)
--- PASS: TestPause/serial/Start (89.23s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-312020
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-312020: (1.125137365s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.64s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-231445 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-231445 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.615086111s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.64s)

                                                
                                    
TestPause/serial/Pause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-231445 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

                                                
                                    
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-231445 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-231445 --output=json --layout=cluster: exit status 2 (385.508441ms)

                                                
                                                
-- stdout --
	{"Name":"pause-231445","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-231445","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)

                                                
                                    
TestPause/serial/Unpause (0.84s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-231445 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

                                                
                                    
TestPause/serial/PauseAgain (1.1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-231445 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-231445 --alsologtostderr -v=5: (1.103590721s)
--- PASS: TestPause/serial/PauseAgain (1.10s)

                                                
                                    
TestPause/serial/DeletePaused (2.91s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-231445 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-231445 --alsologtostderr -v=5: (2.911747326s)
--- PASS: TestPause/serial/DeletePaused (2.91s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.15s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-231445
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-231445: exit status 1 (25.591272ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-231445: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.15s)
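The deleted-resources check above expects docker volume inspect to fail once the profile has been removed. A short illustrative sketch of that assertion, assuming the docker CLI and the pause-231445 name from this run.

// Sketch: after "minikube delete -p pause-231445" the profile's volume should
// be gone, so inspect is expected to fail with "no such volume".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "volume", "inspect", "pause-231445").CombinedOutput()
	if err != nil {
		fmt.Println("volume gone as expected:", err)
		return
	}
	fmt.Println("unexpected: volume still present:", string(out))
}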

                                                
                                    
TestNetworkPlugins/group/false (4.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-140647 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-140647 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (195.99993ms)

                                                
                                                
-- stdout --
	* [false-140647] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:08:27.692085 2731609 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:08:27.692304 2731609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:08:27.692330 2731609 out.go:358] Setting ErrFile to fd 2...
	I0930 11:08:27.692351 2731609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:08:27.692625 2731609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2538756/.minikube/bin
	I0930 11:08:27.693067 2731609 out.go:352] Setting JSON to false
	I0930 11:08:27.694089 2731609 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":154256,"bootTime":1727540252,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0930 11:08:27.694197 2731609 start.go:139] virtualization:  
	I0930 11:08:27.701219 2731609 out.go:177] * [false-140647] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 11:08:27.704435 2731609 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:08:27.704616 2731609 notify.go:220] Checking for updates...
	I0930 11:08:27.709933 2731609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:08:27.713096 2731609 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-2538756/kubeconfig
	I0930 11:08:27.716739 2731609 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2538756/.minikube
	I0930 11:08:27.718929 2731609 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 11:08:27.721115 2731609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:08:27.723975 2731609 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:08:27.747905 2731609 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 11:08:27.748035 2731609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 11:08:27.809684 2731609 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-30 11:08:27.799116382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 11:08:27.809798 2731609 docker.go:318] overlay module found
	I0930 11:08:27.813142 2731609 out.go:177] * Using the docker driver based on user configuration
	I0930 11:08:27.814989 2731609 start.go:297] selected driver: docker
	I0930 11:08:27.815006 2731609 start.go:901] validating driver "docker" against <nil>
	I0930 11:08:27.815021 2731609 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:08:27.817646 2731609 out.go:201] 
	W0930 11:08:27.819580 2731609 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0930 11:08:27.821115 2731609 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-140647 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-140647" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-140647" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-140647

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140647"

                                                
                                                
----------------------- debugLogs end: false-140647 [took: 4.465257665s] --------------------------------
helpers_test.go:175: Cleaning up "false-140647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-140647
--- PASS: TestNetworkPlugins/group/false (4.87s)
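The exit status 14 above comes from the CNI validation: the containerd runtime cannot run with --cni=false. A hedged sketch of such a check follows; the function name and exact rule are illustrative, not minikube's code.

// Sketch of a runtime/CNI compatibility check like the one tripped above.
package main

import (
	"errors"
	"fmt"
)

func validateCNI(containerRuntime, cni string) error {
	// Only the docker runtime can do without a CNI plugin in this sketch.
	if cni == "false" && containerRuntime != "docker" {
		return errors.New(`the "` + containerRuntime + `" container runtime requires CNI`)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("containerd", "false")) // the exit status 14 case above
	fmt.Println(validateCNI("containerd", "auto"))  // accepted
}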

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (146.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-852171 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0930 11:11:02.567824 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:10.412349 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-852171 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m26.720489592s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (146.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-852171 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [24f0db37-870b-47d6-9bc3-4520262f92d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [24f0db37-870b-47d6-9bc3-4520262f92d0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005872677s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-852171 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.74s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-852171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-852171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.281837999s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-852171 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (72.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-935352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-935352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m12.605869848s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (14.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-852171 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-852171 --alsologtostderr -v=3: (14.606588456s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.61s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-852171 -n old-k8s-version-852171
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-852171 -n old-k8s-version-852171: exit status 7 (105.106571ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-852171 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
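The check above keys off minikube's exit code rather than the printed status: a stopped profile makes "status" exit with code 7, which the test accepts before re-enabling the dashboard addon. A quick manual sketch using the same flags as the log (quoting around the Go template is added here for interactive shells):

    # a stopped profile: status prints "Stopped" and exits 7
    out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-852171 -n old-k8s-version-852171
    echo "status exit code: $?"
    # addons can still be toggled while the profile is stopped
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-852171 --images=MetricsScraper=registry.k8s.io/echoserver:1.4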

TestStartStop/group/no-preload/serial/DeployApp (10.43s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-935352 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0d18f276-d962-40f9-9a03-c33a8bf44abd] Pending
helpers_test.go:344: "busybox" [0d18f276-d962-40f9-9a03-c33a8bf44abd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0d18f276-d962-40f9-9a03-c33a8bf44abd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003707484s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-935352 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.43s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-935352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-935352 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/no-preload/serial/Stop (12.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-935352 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-935352 --alsologtostderr -v=3: (12.028413544s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-935352 -n no-preload-935352
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-935352 -n no-preload-935352: exit status 7 (82.019899ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-935352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (280.34s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-935352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0930 11:15:13.485607 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:10.412007 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:59.487252 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-935352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m39.860887031s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-935352 -n no-preload-935352
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (280.34s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jfr67" [4951c615-4c9c-426d-a594-90cc169bf6bc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003583965s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jfr67" [4951c615-4c9c-426d-a594-90cc169bf6bc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004863196s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-935352 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-935352 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/no-preload/serial/Pause (4.26s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-935352 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-935352 --alsologtostderr -v=1: (1.108555273s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-935352 -n no-preload-935352
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-935352 -n no-preload-935352: exit status 2 (396.595873ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-935352 -n no-preload-935352
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-935352 -n no-preload-935352: exit status 2 (394.613194ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-935352 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-935352 --alsologtostderr -v=1: (1.203079449s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-935352 -n no-preload-935352
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-935352 -n no-preload-935352
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.26s)
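The Pause test drives a pause / status / unpause cycle; while paused, the apiserver reports "Paused" and the kubelet "Stopped", each with exit status 2, which the test tolerates. A manual sketch with the same flags as the log (template quoting added for interactive shells):

    out/minikube-linux-arm64 pause -p no-preload-935352 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-935352 -n no-preload-935352   # "Paused", exit 2
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p no-preload-935352 -n no-preload-935352     # "Stopped", exit 2
    out/minikube-linux-arm64 unpause -p no-preload-935352 --alsologtostderr -v=1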

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d7mlj" [f702e364-fc94-418f-9900-502a6ebb233e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.015577296s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/embed-certs/serial/FirstStart (88.4s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-446814 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-446814 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m28.396149565s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.40s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d7mlj" [f702e364-fc94-418f-9900-502a6ebb233e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004216304s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-852171 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-852171 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (3.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-852171 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-852171 --alsologtostderr -v=1: (1.00891539s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-852171 -n old-k8s-version-852171
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-852171 -n old-k8s-version-852171: exit status 2 (426.680064ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-852171 -n old-k8s-version-852171
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-852171 -n old-k8s-version-852171: exit status 2 (399.039058ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-852171 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-852171 --alsologtostderr -v=1: (1.123913217s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-852171 -n old-k8s-version-852171
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-852171 -n old-k8s-version-852171
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.88s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-518878 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-518878 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (56.720994128s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.72s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-518878 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0c146548-6ab5-4ef4-83c2-9de028c7af20] Pending
helpers_test.go:344: "busybox" [0c146548-6ab5-4ef4-83c2-9de028c7af20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0c146548-6ab5-4ef4-83c2-9de028c7af20] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003502427s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-518878 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-518878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-518878 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-518878 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-518878 --alsologtostderr -v=3: (12.115656057s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-446814 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [29a7c576-0517-418c-8f7c-a63ecfa61fd3] Pending
helpers_test.go:344: "busybox" [29a7c576-0517-418c-8f7c-a63ecfa61fd3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [29a7c576-0517-418c-8f7c-a63ecfa61fd3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.023584578s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-446814 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-518878 -n default-k8s-diff-port-518878
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-518878 -n default-k8s-diff-port-518878: exit status 7 (83.117065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-518878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-518878 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-518878 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m49.228859678s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-518878 -n default-k8s-diff-port-518878
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.60s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-446814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-446814 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/embed-certs/serial/Stop (12.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-446814 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-446814 --alsologtostderr -v=3: (12.385131773s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.39s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-446814 -n embed-certs-446814
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-446814 -n embed-certs-446814: exit status 7 (138.976993ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-446814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/SecondStart (271.19s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-446814 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0930 11:22:10.412171 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:19.061782 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:19.068219 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:19.079731 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:19.101114 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:19.142784 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:19.224214 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:19.385781 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:19.707658 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:20.349106 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:21.630455 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:24.191964 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:29.313799 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:39.555170 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:22:59.487000 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:00.041649 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:41.011995 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:41.855574 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:41.862092 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:41.873455 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:41.895464 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:41.936864 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:42.018413 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:42.180574 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:42.502190 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:43.143875 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:44.425414 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:46.986834 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:23:52.108611 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:24:02.350102 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:24:22.831788 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:25:02.933365 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:25:03.793293 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-446814 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m30.77398557s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-446814 -n embed-certs-446814
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (271.19s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hc8ll" [210b6ac5-1ad8-416d-bc9d-bea0605ed518] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003607059s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-27xh5" [b9ab860e-3d5e-4f2d-a49d-90783c2e58fb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004489347s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hc8ll" [210b6ac5-1ad8-416d-bc9d-bea0605ed518] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00369776s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-446814 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-27xh5" [b9ab860e-3d5e-4f2d-a49d-90783c2e58fb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004211443s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-518878 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.26s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-446814 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.59s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-446814 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-446814 -n embed-certs-446814
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-446814 -n embed-certs-446814: exit status 2 (319.951609ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-446814 -n embed-certs-446814
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-446814 -n embed-certs-446814: exit status 2 (366.875438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-446814 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-446814 -n embed-certs-446814
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-446814 -n embed-certs-446814
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.59s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-518878 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-518878 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-518878 --alsologtostderr -v=1: (1.39129029s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-518878 -n default-k8s-diff-port-518878
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-518878 -n default-k8s-diff-port-518878: exit status 2 (411.639637ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-518878 -n default-k8s-diff-port-518878
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-518878 -n default-k8s-diff-port-518878: exit status 2 (341.154091ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-518878 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-518878 -n default-k8s-diff-port-518878
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-518878 -n default-k8s-diff-port-518878
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.45s)

TestStartStop/group/newest-cni/serial/FirstStart (47.1s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-230756 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-230756 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (47.094976395s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.10s)

TestNetworkPlugins/group/auto/Start (97.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0930 11:26:25.715032 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m37.289612518s)
--- PASS: TestNetworkPlugins/group/auto/Start (97.29s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-230756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-230756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.133243908s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-230756 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-230756 --alsologtostderr -v=3: (1.29019023s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-230756 -n newest-cni-230756
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-230756 -n newest-cni-230756: exit status 7 (71.016177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-230756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (16.44s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-230756 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-230756 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (16.066360386s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-230756 -n newest-cni-230756
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.44s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-230756 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (3.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-230756 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-230756 -n newest-cni-230756
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-230756 -n newest-cni-230756: exit status 2 (306.942274ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-230756 -n newest-cni-230756
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-230756 -n newest-cni-230756: exit status 2 (304.712286ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-230756 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-230756 -n newest-cni-230756
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-230756 -n newest-cni-230756
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.35s)
E0930 11:32:19.061542 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/flannel/Start (51.92s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0930 11:27:10.411878 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/functional-262469/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:27:19.061586 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/old-k8s-version-852171/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (51.924217076s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.92s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-140647 "pgrep -a kubelet"
I0930 11:27:27.250214 2544157 config.go:182] Loaded profile config "auto-140647": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-140647 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wmdm7" [bb21c6bf-3ca4-481a-b15a-55d25d797aa3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wmdm7" [bb21c6bf-3ca4-481a-b15a-55d25d797aa3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004851789s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.33s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-140647 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-j5w72" [328a7afc-33b8-488d-805d-5e44241cbad5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004019346s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-140647 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-140647 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rnmbx" [da60fbde-7f77-443f-97fa-84fd9e2dba94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rnmbx" [da60fbde-7f77-443f-97fa-84fd9e2dba94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003947384s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

TestNetworkPlugins/group/calico/Start (70.04s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0930 11:27:59.487114 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/addons-472765/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m10.044082356s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.04s)

TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-140647 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (59.01s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0930 11:28:41.856233 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.010402947s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xqg7m" [57697192-0bea-4b79-92d5-b1a2207f01e0] Running
E0930 11:29:09.556950 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/no-preload-935352/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005137474s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-140647 "pgrep -a kubelet"
I0930 11:29:15.184034 2544157 config.go:182] Loaded profile config "calico-140647": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (10.44s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-140647 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vqbt7" [df373092-7ea5-4411-a0c4-add223a3bcf3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vqbt7" [df373092-7ea5-4411-a0c4-add223a3bcf3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00453241s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.44s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-140647 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-140647 "pgrep -a kubelet"
I0930 11:29:31.509976 2544157 config.go:182] Loaded profile config "custom-flannel-140647": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-140647 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v9phf" [41ce19d0-3e64-4514-8af9-293d6d608448] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-v9phf" [41ce19d0-3e64-4514-8af9-293d6d608448] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003930699s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.48s)

TestNetworkPlugins/group/custom-flannel/DNS (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-140647 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.36s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/kindnet/Start (99.43s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m39.434565763s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (99.43s)

TestNetworkPlugins/group/bridge/Start (50.1s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0930 11:30:17.722325 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:17.728701 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:17.740016 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:17.761362 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:17.802747 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:17.884131 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:18.045613 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:18.367543 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:19.009668 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:20.290979 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:22.852145 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:27.974017 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:38.216145 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:30:58.698071 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (50.102700485s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-140647 "pgrep -a kubelet"
I0930 11:30:59.141484 2544157 config.go:182] Loaded profile config "bridge-140647": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-140647 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9wrjs" [af662a52-71aa-46b4-ac3d-4881c896a721] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9wrjs" [af662a52-71aa-46b4-ac3d-4881c896a721] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003727811s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-140647 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rw52m" [961e8b97-7cf5-4483-a4ed-7bc41524e95b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004420505s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (53.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-140647 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (53.12019225s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.12s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.64s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-140647 "pgrep -a kubelet"
I0930 11:31:35.292017 2544157 config.go:182] Loaded profile config "kindnet-140647": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.64s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.56s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-140647 replace --force -f testdata/netcat-deployment.yaml
I0930 11:31:35.769774 2544157 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rvrrg" [ab008e03-e662-44fb-8bef-587dbfda48c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0930 11:31:39.660092 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/default-k8s-diff-port-518878/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-rvrrg" [ab008e03-e662-44fb-8bef-587dbfda48c1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004025618s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.56s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-140647 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-140647 "pgrep -a kubelet"
I0930 11:32:22.357870 2544157 config.go:182] Loaded profile config "enable-default-cni-140647": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-140647 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xt28p" [20916b76-7daf-4083-854c-34826bf1370a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xt28p" [20916b76-7daf-4083-854c-34826bf1370a] Running
E0930 11:32:27.551743 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/auto-140647/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:27.558153 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/auto-140647/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:27.569603 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/auto-140647/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:27.591053 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/auto-140647/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:27.632786 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/auto-140647/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:27.714319 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/auto-140647/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:27.876248 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/auto-140647/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:28.198169 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/auto-140647/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:28.839746 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/auto-140647/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:30.122447 2544157 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/auto-140647/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004437643s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-140647 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-140647 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

Test skip (27/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-902924 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-902924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-902924
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-166520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-166520
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (4.62s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-140647 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-140647

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-140647

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-140647

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-140647

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-140647

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-140647

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-140647

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-140647

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-140647

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-140647

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

>>> host: /etc/hosts:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

>>> host: /etc/resolv.conf:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-140647

>>> host: crictl pods:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

>>> host: crictl containers:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

>>> k8s: describe netcat deployment:
error: context "kubenet-140647" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-140647" does not exist

>>> k8s: netcat logs:
error: context "kubenet-140647" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-140647" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-140647" does not exist

>>> k8s: coredns logs:
error: context "kubenet-140647" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-140647" does not exist

>>> k8s: api server logs:
error: context "kubenet-140647" does not exist

>>> host: /etc/cni:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

>>> host: ip a s:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

>>> host: ip r s:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

>>> host: iptables-save:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

>>> host: iptables table nat:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-140647" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-140647" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-140647" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19734-2538756/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 30 Sep 2024 11:08:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-231445
contexts:
- context:
    cluster: pause-231445
    extensions:
    - extension:
        last-update: Mon, 30 Sep 2024 11:08:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-231445
  name: pause-231445
current-context: pause-231445
kind: Config
preferences: {}
users:
- name: pause-231445
  user:
    client-certificate: /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/pause-231445/client.crt
    client-key: /home/jenkins/minikube-integration/19734-2538756/.minikube/profiles/pause-231445/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-140647

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140647"

                                                
                                                
----------------------- debugLogs end: kubenet-140647 [took: 4.436884444s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-140647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-140647
--- SKIP: TestNetworkPlugins/group/kubenet (4.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-140647 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-140647" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-140647

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-140647" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140647"

                                                
                                                
----------------------- debugLogs end: cilium-140647 [took: 5.234655246s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-140647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-140647
--- SKIP: TestNetworkPlugins/group/cilium (5.43s)

                                                
                                    