Test Report: Docker_Linux_containerd_arm64 19667

39f19baf3a7e1c810682dda0eb22abd909c6f2ab:2024-09-18:36273

Failed tests (2/328)

Order  Failed test                                              Duration
29     TestAddons/serial/Volcano                                199.86s
302    TestStartStop/group/old-k8s-version/serial/SecondStart   374.46s
TestAddons/serial/Volcano (199.86s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 56.211957ms
addons_test.go:897: volcano-scheduler stabilized in 56.745428ms
addons_test.go:913: volcano-controller stabilized in 56.808156ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-gn4g8" [79a13bed-f3af-47b5-b111-1fb205665172] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003420428s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-f9m4v" [6c21521d-5971-416c-a529-4ae2973bcb55] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00414791s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-dzjcw" [8353b102-dc5c-4a2d-baee-e5504cd4b9dd] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003452015s
addons_test.go:932: (dbg) Run:  kubectl --context addons-287708 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-287708 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-287708 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [464d0335-d734-4133-9ba3-21e516c826a7] Pending
helpers_test.go:344: "test-job-nginx-0" [464d0335-d734-4133-9ba3-21e516c826a7] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-287708 -n addons-287708
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-18 20:32:49.572991777 +0000 UTC m=+440.220514385
addons_test.go:964: (dbg) Run:  kubectl --context addons-287708 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-287708 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-e5cf2c8f-56f5-4311-a4cd-6979b3ce27ff
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vf2dz (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-vf2dz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m58s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-287708 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-287708 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
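
The describe output above pins down the failure: the test-job pod requests cpu: 1, but the profile's single node was started with only 2 CPUs (see the docker run near the end of this log), and the preinstalled addons already claim most of that allocation. A minimal sketch of how one could confirm the shortfall against this profile; on a single-node minikube cluster the node name matches the profile name, and the grep window size here is an arbitrary choice:

	# Allocatable CPU on the node vs. what is already requested by running pods:
	kubectl --context addons-287708 describe node addons-287708 | grep -A 8 'Allocated resources'
	# The failing pod's own CPU request (per the describe output above, this is 1):
	kubectl --context addons-287708 get pod test-job-nginx-0 -n my-volcano -o jsonpath='{.spec.containers[0].resources.requests.cpu}{"\n"}'
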
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-287708
helpers_test.go:235: (dbg) docker inspect addons-287708:

-- stdout --
	[
	    {
	        "Id": "7f9c77061a0d1473f1db35695cef8c482b948726555b159884baef542d56e247",
	        "Created": "2024-09-18T20:26:18.369405848Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 880745,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-18T20:26:18.504840811Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/7f9c77061a0d1473f1db35695cef8c482b948726555b159884baef542d56e247/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f9c77061a0d1473f1db35695cef8c482b948726555b159884baef542d56e247/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f9c77061a0d1473f1db35695cef8c482b948726555b159884baef542d56e247/hosts",
	        "LogPath": "/var/lib/docker/containers/7f9c77061a0d1473f1db35695cef8c482b948726555b159884baef542d56e247/7f9c77061a0d1473f1db35695cef8c482b948726555b159884baef542d56e247-json.log",
	        "Name": "/addons-287708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-287708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-287708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5110bb1fd25dc4a6ab58db21ee742583cce45d6f83b442d81bfa3ae5d49b6e18-init/diff:/var/lib/docker/overlay2/e15030a03ca75c521300a5809bba283a333356a542417dabfffce840b03425c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5110bb1fd25dc4a6ab58db21ee742583cce45d6f83b442d81bfa3ae5d49b6e18/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5110bb1fd25dc4a6ab58db21ee742583cce45d6f83b442d81bfa3ae5d49b6e18/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5110bb1fd25dc4a6ab58db21ee742583cce45d6f83b442d81bfa3ae5d49b6e18/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-287708",
	                "Source": "/var/lib/docker/volumes/addons-287708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-287708",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-287708",
	                "name.minikube.sigs.k8s.io": "addons-287708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "78c96898b7da57659dfce6446495b6d46d1b67c2c61a35e905c09ac09a318b41",
	            "SandboxKey": "/var/run/docker/netns/78c96898b7da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33883"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-287708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d69915eba22191faef4f15c0c2695dfb488114402e0cc68299bc730e54e3a1ba",
	                    "EndpointID": "a0cd951a98321af40acb35a242e4cdded5c7f25a337b4a2457f822daa4255d29",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-287708",
	                        "7f9c77061a0d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
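
Two fields in the inspect dump above are the ones relevant to the scheduling failure: HostConfig.NanoCpus (2000000000, i.e. 2 CPUs) and HostConfig.Memory (4194304000 bytes, matching the --memory=4000 start flag). A quick sketch for pulling just those fields instead of the full dump:

	docker inspect -f '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' addons-287708
	# expected, per the dump above: 2000000000 4194304000
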
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-287708 -n addons-287708
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-287708 logs -n 25: (1.600513212s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-045343   | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC |                     |
	|         | -p download-only-045343              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC | 18 Sep 24 20:25 UTC |
	| delete  | -p download-only-045343              | download-only-045343   | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC | 18 Sep 24 20:25 UTC |
	| start   | -o=json --download-only              | download-only-879156   | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC |                     |
	|         | -p download-only-879156              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC | 18 Sep 24 20:25 UTC |
	| delete  | -p download-only-879156              | download-only-879156   | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC | 18 Sep 24 20:25 UTC |
	| delete  | -p download-only-045343              | download-only-045343   | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC | 18 Sep 24 20:25 UTC |
	| delete  | -p download-only-879156              | download-only-879156   | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC | 18 Sep 24 20:25 UTC |
	| start   | --download-only -p                   | download-docker-222440 | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC |                     |
	|         | download-docker-222440               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-222440            | download-docker-222440 | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC | 18 Sep 24 20:25 UTC |
	| start   | --download-only -p                   | binary-mirror-179296   | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC |                     |
	|         | binary-mirror-179296                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37755               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-179296              | binary-mirror-179296   | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC | 18 Sep 24 20:25 UTC |
	| addons  | enable dashboard -p                  | addons-287708          | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC |                     |
	|         | addons-287708                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-287708          | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC |                     |
	|         | addons-287708                        |                        |         |         |                     |                     |
	| start   | -p addons-287708 --wait=true         | addons-287708          | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC | 18 Sep 24 20:29 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:25:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:25:53.453757  880249 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:25:53.453895  880249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:25:53.453906  880249 out.go:358] Setting ErrFile to fd 2...
	I0918 20:25:53.453911  880249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:25:53.454144  880249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	I0918 20:25:53.454624  880249 out.go:352] Setting JSON to false
	I0918 20:25:53.455455  880249 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14902,"bootTime":1726676252,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0918 20:25:53.455528  880249 start.go:139] virtualization:  
	I0918 20:25:53.458004  880249 out.go:177] * [addons-287708] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0918 20:25:53.460673  880249 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:25:53.460797  880249 notify.go:220] Checking for updates...
	I0918 20:25:53.465138  880249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:25:53.467090  880249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 20:25:53.468895  880249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	I0918 20:25:53.470901  880249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 20:25:53.472895  880249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:25:53.475473  880249 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:25:53.500510  880249 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 20:25:53.500679  880249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:25:53.563808  880249 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-18 20:25:53.554011278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:25:53.563924  880249 docker.go:318] overlay module found
	I0918 20:25:53.566109  880249 out.go:177] * Using the docker driver based on user configuration
	I0918 20:25:53.568419  880249 start.go:297] selected driver: docker
	I0918 20:25:53.568435  880249 start.go:901] validating driver "docker" against <nil>
	I0918 20:25:53.568448  880249 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:25:53.569088  880249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:25:53.620167  880249 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-18 20:25:53.611251101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:25:53.620393  880249 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 20:25:53.620627  880249 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:25:53.622502  880249 out.go:177] * Using Docker driver with root privileges
	I0918 20:25:53.624411  880249 cni.go:84] Creating CNI manager for ""
	I0918 20:25:53.624483  880249 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0918 20:25:53.624503  880249 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 20:25:53.624587  880249 start.go:340] cluster config:
	{Name:addons-287708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-287708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:25:53.627025  880249 out.go:177] * Starting "addons-287708" primary control-plane node in "addons-287708" cluster
	I0918 20:25:53.628736  880249 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0918 20:25:53.630759  880249 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0918 20:25:53.632764  880249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0918 20:25:53.632827  880249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-874114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0918 20:25:53.632840  880249 cache.go:56] Caching tarball of preloaded images
	I0918 20:25:53.632838  880249 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0918 20:25:53.632922  880249 preload.go:172] Found /home/jenkins/minikube-integration/19667-874114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 20:25:53.632931  880249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0918 20:25:53.633309  880249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/config.json ...
	I0918 20:25:53.633464  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/config.json: {Name:mkaeea9f27e4e5e6843c8342e3b20c08016548f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:25:53.648875  880249 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 20:25:53.648999  880249 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 20:25:53.649025  880249 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0918 20:25:53.649032  880249 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0918 20:25:53.649039  880249 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0918 20:25:53.649045  880249 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0918 20:26:11.434147  880249 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0918 20:26:11.434185  880249 cache.go:194] Successfully downloaded all kic artifacts
	I0918 20:26:11.434229  880249 start.go:360] acquireMachinesLock for addons-287708: {Name:mk488060ba655420f30281506539486a4d89c4ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:26:11.434362  880249 start.go:364] duration metric: took 108.209µs to acquireMachinesLock for "addons-287708"
	I0918 20:26:11.434392  880249 start.go:93] Provisioning new machine with config: &{Name:addons-287708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-287708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0918 20:26:11.434486  880249 start.go:125] createHost starting for "" (driver="docker")
	I0918 20:26:11.436764  880249 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0918 20:26:11.437004  880249 start.go:159] libmachine.API.Create for "addons-287708" (driver="docker")
	I0918 20:26:11.437041  880249 client.go:168] LocalClient.Create starting
	I0918 20:26:11.437158  880249 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem
	I0918 20:26:11.726207  880249 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/cert.pem
	I0918 20:26:12.087870  880249 cli_runner.go:164] Run: docker network inspect addons-287708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0918 20:26:12.105684  880249 cli_runner.go:211] docker network inspect addons-287708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0918 20:26:12.105786  880249 network_create.go:284] running [docker network inspect addons-287708] to gather additional debugging logs...
	I0918 20:26:12.105813  880249 cli_runner.go:164] Run: docker network inspect addons-287708
	W0918 20:26:12.122023  880249 cli_runner.go:211] docker network inspect addons-287708 returned with exit code 1
	I0918 20:26:12.122056  880249 network_create.go:287] error running [docker network inspect addons-287708]: docker network inspect addons-287708: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-287708 not found
	I0918 20:26:12.122075  880249 network_create.go:289] output of [docker network inspect addons-287708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-287708 not found
	
	** /stderr **
	I0918 20:26:12.122176  880249 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 20:26:12.138437  880249 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400175c9c0}
	I0918 20:26:12.138487  880249 network_create.go:124] attempt to create docker network addons-287708 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0918 20:26:12.138550  880249 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-287708 addons-287708
	I0918 20:26:12.206968  880249 network_create.go:108] docker network addons-287708 192.168.49.0/24 created
	I0918 20:26:12.207007  880249 kic.go:121] calculated static IP "192.168.49.2" for the "addons-287708" container
	I0918 20:26:12.207081  880249 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0918 20:26:12.226047  880249 cli_runner.go:164] Run: docker volume create addons-287708 --label name.minikube.sigs.k8s.io=addons-287708 --label created_by.minikube.sigs.k8s.io=true
	I0918 20:26:12.243846  880249 oci.go:103] Successfully created a docker volume addons-287708
	I0918 20:26:12.243948  880249 cli_runner.go:164] Run: docker run --rm --name addons-287708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-287708 --entrypoint /usr/bin/test -v addons-287708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0918 20:26:14.257218  880249 cli_runner.go:217] Completed: docker run --rm --name addons-287708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-287708 --entrypoint /usr/bin/test -v addons-287708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (2.013221748s)
	I0918 20:26:14.257251  880249 oci.go:107] Successfully prepared a docker volume addons-287708
	I0918 20:26:14.257277  880249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0918 20:26:14.257298  880249 kic.go:194] Starting extracting preloaded images to volume ...
	I0918 20:26:14.257380  880249 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19667-874114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-287708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0918 20:26:18.302988  880249 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19667-874114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-287708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.045559577s)
	I0918 20:26:18.303024  880249 kic.go:203] duration metric: took 4.045722917s to extract preloaded images to volume ...
	W0918 20:26:18.303176  880249 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0918 20:26:18.303298  880249 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0918 20:26:18.355158  880249 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-287708 --name addons-287708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-287708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-287708 --network addons-287708 --ip 192.168.49.2 --volume addons-287708:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0918 20:26:18.667795  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Running}}
	I0918 20:26:18.689237  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:18.713190  880249 cli_runner.go:164] Run: docker exec addons-287708 stat /var/lib/dpkg/alternatives/iptables
	I0918 20:26:18.785591  880249 oci.go:144] the created container "addons-287708" has a running status.
	I0918 20:26:18.785621  880249 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa...
	I0918 20:26:20.167542  880249 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0918 20:26:20.186885  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:20.203944  880249 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0918 20:26:20.203971  880249 kic_runner.go:114] Args: [docker exec --privileged addons-287708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0918 20:26:20.259639  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:20.279000  880249 machine.go:93] provisionDockerMachine start ...
	I0918 20:26:20.279098  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:20.297314  880249 main.go:141] libmachine: Using SSH client type: native
	I0918 20:26:20.297586  880249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33880 <nil> <nil>}
	I0918 20:26:20.297604  880249 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 20:26:20.443494  880249 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-287708
	
	I0918 20:26:20.443537  880249 ubuntu.go:169] provisioning hostname "addons-287708"
	I0918 20:26:20.443604  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:20.461477  880249 main.go:141] libmachine: Using SSH client type: native
	I0918 20:26:20.461736  880249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33880 <nil> <nil>}
	I0918 20:26:20.461754  880249 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-287708 && echo "addons-287708" | sudo tee /etc/hostname
	I0918 20:26:20.619492  880249 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-287708
	
	I0918 20:26:20.619575  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:20.635937  880249 main.go:141] libmachine: Using SSH client type: native
	I0918 20:26:20.636237  880249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33880 <nil> <nil>}
	I0918 20:26:20.636261  880249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-287708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-287708/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-287708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:26:20.780552  880249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:26:20.780620  880249 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19667-874114/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-874114/.minikube}
	I0918 20:26:20.780672  880249 ubuntu.go:177] setting up certificates
	I0918 20:26:20.780712  880249 provision.go:84] configureAuth start
	I0918 20:26:20.780840  880249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-287708
	I0918 20:26:20.797294  880249 provision.go:143] copyHostCerts
	I0918 20:26:20.797384  880249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-874114/.minikube/ca.pem (1082 bytes)
	I0918 20:26:20.797522  880249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-874114/.minikube/cert.pem (1123 bytes)
	I0918 20:26:20.797591  880249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-874114/.minikube/key.pem (1679 bytes)
	I0918 20:26:20.797652  880249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-874114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca-key.pem org=jenkins.addons-287708 san=[127.0.0.1 192.168.49.2 addons-287708 localhost minikube]
	I0918 20:26:21.016682  880249 provision.go:177] copyRemoteCerts
	I0918 20:26:21.016754  880249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:26:21.016813  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:21.034444  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:21.137392  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:26:21.162420  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 20:26:21.187143  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 20:26:21.211877  880249 provision.go:87] duration metric: took 431.134354ms to configureAuth
	I0918 20:26:21.211903  880249 ubuntu.go:193] setting minikube options for container-runtime
	I0918 20:26:21.212126  880249 config.go:182] Loaded profile config "addons-287708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 20:26:21.212140  880249 machine.go:96] duration metric: took 933.119062ms to provisionDockerMachine
	I0918 20:26:21.212147  880249 client.go:171] duration metric: took 9.775095954s to LocalClient.Create
	I0918 20:26:21.212166  880249 start.go:167] duration metric: took 9.775163548s to libmachine.API.Create "addons-287708"
	I0918 20:26:21.212177  880249 start.go:293] postStartSetup for "addons-287708" (driver="docker")
	I0918 20:26:21.212199  880249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:26:21.212260  880249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:26:21.212306  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:21.229205  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:21.329315  880249 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:26:21.332452  880249 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 20:26:21.332490  880249 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 20:26:21.332502  880249 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 20:26:21.332511  880249 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0918 20:26:21.332522  880249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-874114/.minikube/addons for local assets ...
	I0918 20:26:21.332590  880249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-874114/.minikube/files for local assets ...
	I0918 20:26:21.332619  880249 start.go:296] duration metric: took 120.435552ms for postStartSetup
	I0918 20:26:21.332952  880249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-287708
	I0918 20:26:21.349618  880249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/config.json ...
	I0918 20:26:21.349921  880249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:26:21.349990  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:21.366564  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:21.464732  880249 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 20:26:21.469030  880249 start.go:128] duration metric: took 10.034526657s to createHost
	I0918 20:26:21.469056  880249 start.go:83] releasing machines lock for "addons-287708", held for 10.034682324s
	I0918 20:26:21.469127  880249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-287708
	I0918 20:26:21.485510  880249 ssh_runner.go:195] Run: cat /version.json
	I0918 20:26:21.485544  880249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:26:21.485564  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:21.485615  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:21.504892  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:21.518926  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:21.728298  880249 ssh_runner.go:195] Run: systemctl --version
	I0918 20:26:21.732724  880249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 20:26:21.737149  880249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0918 20:26:21.762545  880249 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0918 20:26:21.762691  880249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:26:21.793253  880249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
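The pair of find commands above does two things: it patches the stock loopback config in place (injecting a "name" field if one is missing and pinning cniVersion to 1.0.0), and it parks any bridge/podman configs by renaming them with a .mk_disabled suffix so the CNI minikube installs later can own the pod network. After the patch, a stock loopback conf comes out roughly like this (a sketch, assuming the default shipped file):

	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}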
	I0918 20:26:21.793281  880249 start.go:495] detecting cgroup driver to use...
	I0918 20:26:21.793336  880249 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 20:26:21.793402  880249 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 20:26:21.805955  880249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 20:26:21.817417  880249 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:26:21.817483  880249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:26:21.835300  880249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:26:21.850393  880249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:26:21.936340  880249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:26:22.029570  880249 docker.go:233] disabling docker service ...
	I0918 20:26:22.029686  880249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:26:22.049217  880249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:26:22.061094  880249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:26:22.148228  880249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:26:22.238329  880249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:26:22.250103  880249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:26:22.268724  880249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0918 20:26:22.278915  880249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 20:26:22.288688  880249 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 20:26:22.288758  880249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 20:26:22.298443  880249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 20:26:22.307787  880249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 20:26:22.317281  880249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 20:26:22.327070  880249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:26:22.336138  880249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 20:26:22.346081  880249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0918 20:26:22.355881  880249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0918 20:26:22.365470  880249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:26:22.373963  880249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:26:22.382150  880249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:26:22.461808  880249 ssh_runner.go:195] Run: sudo systemctl restart containerd
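Taken together, the sed edits above leave the CRI section of /etc/containerd/config.toml looking roughly like this before the restart (a sketch of only the touched keys, assuming a stock containerd 1.7 config layout):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false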
	I0918 20:26:22.583163  880249 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0918 20:26:22.583318  880249 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0918 20:26:22.586860  880249 start.go:563] Will wait 60s for crictl version
	I0918 20:26:22.586972  880249 ssh_runner.go:195] Run: which crictl
	I0918 20:26:22.590466  880249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:26:22.625215  880249 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0918 20:26:22.625362  880249 ssh_runner.go:195] Run: containerd --version
	I0918 20:26:22.647779  880249 ssh_runner.go:195] Run: containerd --version
	I0918 20:26:22.672266  880249 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0918 20:26:22.674013  880249 cli_runner.go:164] Run: docker network inspect addons-287708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 20:26:22.689234  880249 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0918 20:26:22.692774  880249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:26:22.703144  880249 kubeadm.go:883] updating cluster {Name:addons-287708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-287708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:26:22.703267  880249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0918 20:26:22.703329  880249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:26:22.740403  880249 containerd.go:627] all images are preloaded for containerd runtime.
	I0918 20:26:22.740426  880249 containerd.go:534] Images already preloaded, skipping extraction
	I0918 20:26:22.740492  880249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:26:22.775838  880249 containerd.go:627] all images are preloaded for containerd runtime.
	I0918 20:26:22.775864  880249 cache_images.go:84] Images are preloaded, skipping loading
	I0918 20:26:22.775872  880249 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0918 20:26:22.775981  880249 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-287708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-287708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
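The unit fragment above lands on the node as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf at 20:26:22.831 below). On the node it can be verified the usual systemd way (a sketch):

	# show the rendered kubelet unit together with its drop-ins
	systemctl cat kubelet
	# re-read units and start kubelet — minikube does the equivalent at 20:26:22.900 below
	sudo systemctl daemon-reload && sudo systemctl start kubelet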
	I0918 20:26:22.776059  880249 ssh_runner.go:195] Run: sudo crictl info
	I0918 20:26:22.813105  880249 cni.go:84] Creating CNI manager for ""
	I0918 20:26:22.813131  880249 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0918 20:26:22.813141  880249 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:26:22.813165  880249 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-287708 NodeName:addons-287708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:26:22.813305  880249 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-287708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 20:26:22.813378  880249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:26:22.822335  880249 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:26:22.822413  880249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 20:26:22.831298  880249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0918 20:26:22.849456  880249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:26:22.867859  880249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0918 20:26:22.886607  880249 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0918 20:26:22.890028  880249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:26:22.900854  880249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:26:22.987596  880249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:26:23.004004  880249 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708 for IP: 192.168.49.2
	I0918 20:26:23.004027  880249 certs.go:194] generating shared ca certs ...
	I0918 20:26:23.004058  880249 certs.go:226] acquiring lock for ca certs: {Name:mk4a2e50bce1acd2df63eb42e5a33734237a5b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:23.004671  880249 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-874114/.minikube/ca.key
	I0918 20:26:23.920772  880249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-874114/.minikube/ca.crt ...
	I0918 20:26:23.920804  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/.minikube/ca.crt: {Name:mk7e71297131d48085c38346cc62a8dc5635d917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:23.921348  880249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-874114/.minikube/ca.key ...
	I0918 20:26:23.921366  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/.minikube/ca.key: {Name:mkde38ea5d67366679edccfdc360e44aae847df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:23.921764  880249 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.key
	I0918 20:26:24.197582  880249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.crt ...
	I0918 20:26:24.197614  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.crt: {Name:mk668812704ee6c2f50f2b86b928589a5cf43353 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:24.197802  880249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.key ...
	I0918 20:26:24.197816  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.key: {Name:mkb676224236f6912afb1a4eda30527a0cb652bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:24.197905  880249 certs.go:256] generating profile certs ...
	I0918 20:26:24.197963  880249 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.key
	I0918 20:26:24.197981  880249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt with IP's: []
	I0918 20:26:24.889839  880249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt ...
	I0918 20:26:24.889871  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: {Name:mk67fdf6a2e69ec226d272c65946a376d02982e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:24.890500  880249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.key ...
	I0918 20:26:24.890519  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.key: {Name:mk48e4a8b924ad8895bb476cc986f13f76a3cc30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:24.890610  880249 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.key.9d4a2ea1
	I0918 20:26:24.890635  880249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.crt.9d4a2ea1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0918 20:26:25.649439  880249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.crt.9d4a2ea1 ...
	I0918 20:26:25.649476  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.crt.9d4a2ea1: {Name:mk56127a88155d58f0f612dca35de0cba6dfddf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:25.650034  880249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.key.9d4a2ea1 ...
	I0918 20:26:25.650053  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.key.9d4a2ea1: {Name:mk0065687dba4e11827cd3cc9129fc1fa7a9776c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:25.650594  880249 certs.go:381] copying /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.crt.9d4a2ea1 -> /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.crt
	I0918 20:26:25.650695  880249 certs.go:385] copying /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.key.9d4a2ea1 -> /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.key
	I0918 20:26:25.650762  880249 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/proxy-client.key
	I0918 20:26:25.650782  880249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/proxy-client.crt with IP's: []
	I0918 20:26:26.259256  880249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/proxy-client.crt ...
	I0918 20:26:26.259293  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/proxy-client.crt: {Name:mkbbe569b52e3949bcfba1df7e8ded3506af95c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:26.259946  880249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/proxy-client.key ...
	I0918 20:26:26.259963  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/proxy-client.key: {Name:mka52d0a06287a4bd4acc145f0eed2a33ffd0490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:26.260559  880249 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 20:26:26.260606  880249 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:26:26.260631  880249 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:26:26.260659  880249 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/key.pem (1679 bytes)
	I0918 20:26:26.261237  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:26:26.285818  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 20:26:26.310040  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:26:26.334123  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:26:26.358548  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0918 20:26:26.382323  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 20:26:26.406543  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:26:26.431165  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 20:26:26.454579  880249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:26:26.478566  880249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:26:26.496740  880249 ssh_runner.go:195] Run: openssl version
	I0918 20:26:26.502131  880249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:26:26.511751  880249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:26:26.515316  880249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 20:26 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:26:26.515381  880249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:26:26.522461  880249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
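The b5213941.0 link name is not arbitrary: it is the subject-name hash printed by the openssl x509 -hash run just above, which is how OpenSSL's CA lookup in /etc/ssl/certs locates a certificate. A by-hand equivalent (a sketch):

	# derive the hash-based link name OpenSSL expects
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"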
	I0918 20:26:26.531501  880249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:26:26.534637  880249 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:26:26.534694  880249 kubeadm.go:392] StartCluster: {Name:addons-287708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-287708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:26:26.534784  880249 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0918 20:26:26.534842  880249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:26:26.576526  880249 cri.go:89] found id: ""
	I0918 20:26:26.576637  880249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 20:26:26.585482  880249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 20:26:26.594379  880249 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0918 20:26:26.594444  880249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 20:26:26.603212  880249 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 20:26:26.603232  880249 kubeadm.go:157] found existing configuration files:
	
	I0918 20:26:26.603287  880249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 20:26:26.612006  880249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 20:26:26.612155  880249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 20:26:26.620896  880249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 20:26:26.629768  880249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 20:26:26.629834  880249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 20:26:26.638333  880249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 20:26:26.647232  880249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 20:26:26.647350  880249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 20:26:26.655905  880249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 20:26:26.664931  880249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 20:26:26.665020  880249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 20:26:26.673584  880249 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0918 20:26:26.711648  880249 kubeadm.go:310] W0918 20:26:26.710943    1028 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 20:26:26.713031  880249 kubeadm.go:310] W0918 20:26:26.712445    1028 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 20:26:26.747499  880249 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0918 20:26:26.817834  880249 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
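The two v1beta3 deprecation warnings above are benign for this run and spell out their own remedy; with the binaries this run staged, it would look like the following (a sketch — the new-config output path is hypothetical):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml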
	I0918 20:26:44.556182  880249 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 20:26:44.556248  880249 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 20:26:44.556342  880249 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0918 20:26:44.556402  880249 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0918 20:26:44.556442  880249 kubeadm.go:310] OS: Linux
	I0918 20:26:44.556492  880249 kubeadm.go:310] CGROUPS_CPU: enabled
	I0918 20:26:44.556547  880249 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0918 20:26:44.556598  880249 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0918 20:26:44.556648  880249 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0918 20:26:44.556701  880249 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0918 20:26:44.556751  880249 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0918 20:26:44.556799  880249 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0918 20:26:44.556850  880249 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0918 20:26:44.556902  880249 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0918 20:26:44.556976  880249 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 20:26:44.557071  880249 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 20:26:44.557162  880249 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 20:26:44.557227  880249 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 20:26:44.559412  880249 out.go:235]   - Generating certificates and keys ...
	I0918 20:26:44.559522  880249 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 20:26:44.559621  880249 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 20:26:44.559707  880249 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 20:26:44.559775  880249 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 20:26:44.559856  880249 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 20:26:44.559949  880249 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 20:26:44.560032  880249 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 20:26:44.560186  880249 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-287708 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 20:26:44.560245  880249 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 20:26:44.560368  880249 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-287708 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 20:26:44.560437  880249 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 20:26:44.560504  880249 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 20:26:44.560552  880249 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 20:26:44.560611  880249 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 20:26:44.560661  880249 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 20:26:44.560716  880249 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 20:26:44.560773  880249 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 20:26:44.560835  880249 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 20:26:44.560888  880249 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 20:26:44.560968  880249 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 20:26:44.561034  880249 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 20:26:44.563958  880249 out.go:235]   - Booting up control plane ...
	I0918 20:26:44.564127  880249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 20:26:44.564218  880249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 20:26:44.564291  880249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 20:26:44.564396  880249 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 20:26:44.564501  880249 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 20:26:44.564543  880249 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 20:26:44.564675  880249 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 20:26:44.564787  880249 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 20:26:44.564849  880249 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.501406458s
	I0918 20:26:44.564921  880249 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 20:26:44.564981  880249 kubeadm.go:310] [api-check] The API server is healthy after 6.502935524s
	I0918 20:26:44.565088  880249 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 20:26:44.565213  880249 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 20:26:44.565273  880249 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 20:26:44.565451  880249 kubeadm.go:310] [mark-control-plane] Marking the node addons-287708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 20:26:44.565509  880249 kubeadm.go:310] [bootstrap-token] Using token: dsan5f.og07vk53pp1kb4nd
	I0918 20:26:44.567520  880249 out.go:235]   - Configuring RBAC rules ...
	I0918 20:26:44.567640  880249 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 20:26:44.567754  880249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 20:26:44.567932  880249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 20:26:44.568159  880249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 20:26:44.568279  880249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 20:26:44.568460  880249 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 20:26:44.568584  880249 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 20:26:44.568631  880249 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 20:26:44.568684  880249 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 20:26:44.568694  880249 kubeadm.go:310] 
	I0918 20:26:44.568753  880249 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 20:26:44.568761  880249 kubeadm.go:310] 
	I0918 20:26:44.568837  880249 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 20:26:44.568844  880249 kubeadm.go:310] 
	I0918 20:26:44.568870  880249 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 20:26:44.568931  880249 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 20:26:44.568996  880249 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 20:26:44.569004  880249 kubeadm.go:310] 
	I0918 20:26:44.569057  880249 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 20:26:44.569066  880249 kubeadm.go:310] 
	I0918 20:26:44.569112  880249 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 20:26:44.569120  880249 kubeadm.go:310] 
	I0918 20:26:44.569170  880249 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 20:26:44.569246  880249 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 20:26:44.569316  880249 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 20:26:44.569324  880249 kubeadm.go:310] 
	I0918 20:26:44.569407  880249 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 20:26:44.569484  880249 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 20:26:44.569492  880249 kubeadm.go:310] 
	I0918 20:26:44.569575  880249 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dsan5f.og07vk53pp1kb4nd \
	I0918 20:26:44.569678  880249 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dfe692def218a3b54a431dc3e568be87c408813330b5ec95924684cacf793979 \
	I0918 20:26:44.569701  880249 kubeadm.go:310] 	--control-plane 
	I0918 20:26:44.569709  880249 kubeadm.go:310] 
	I0918 20:26:44.569793  880249 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 20:26:44.569802  880249 kubeadm.go:310] 
	I0918 20:26:44.569883  880249 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dsan5f.og07vk53pp1kb4nd \
	I0918 20:26:44.569985  880249 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dfe692def218a3b54a431dc3e568be87c408813330b5ec95924684cacf793979 
	I0918 20:26:44.570012  880249 cni.go:84] Creating CNI manager for ""
	I0918 20:26:44.570047  880249 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0918 20:26:44.572464  880249 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0918 20:26:44.574465  880249 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0918 20:26:44.578367  880249 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0918 20:26:44.578445  880249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0918 20:26:44.597658  880249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0918 20:26:44.872845  880249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 20:26:44.873034  880249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:26:44.873133  880249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-287708 minikube.k8s.io/updated_at=2024_09_18T20_26_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=addons-287708 minikube.k8s.io/primary=true
	I0918 20:26:45.071963  880249 ops.go:34] apiserver oom_adj: -16
	I0918 20:26:45.072224  880249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:26:45.572289  880249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:26:46.072583  880249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:26:46.572258  880249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:26:47.072290  880249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:26:47.572836  880249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:26:48.072328  880249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:26:48.572450  880249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:26:48.673471  880249 kubeadm.go:1113] duration metric: took 3.800488332s to wait for elevateKubeSystemPrivileges
	I0918 20:26:48.673508  880249 kubeadm.go:394] duration metric: took 22.138816902s to StartCluster
	I0918 20:26:48.673526  880249 settings.go:142] acquiring lock: {Name:mk57bc44f9fec4b4923bac0bde72e24bb39c4097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:48.673646  880249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 20:26:48.674062  880249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/kubeconfig: {Name:mke33cc40bb5f82b15bbe41884ab27179b9ca37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:26:48.674697  880249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 20:26:48.674730  880249 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0918 20:26:48.674954  880249 config.go:182] Loaded profile config "addons-287708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 20:26:48.674999  880249 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0918 20:26:48.675074  880249 addons.go:69] Setting yakd=true in profile "addons-287708"
	I0918 20:26:48.675088  880249 addons.go:234] Setting addon yakd=true in "addons-287708"
	I0918 20:26:48.675111  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.675140  880249 addons.go:69] Setting inspektor-gadget=true in profile "addons-287708"
	I0918 20:26:48.675157  880249 addons.go:234] Setting addon inspektor-gadget=true in "addons-287708"
	I0918 20:26:48.675179  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.675566  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.675712  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.676215  880249 addons.go:69] Setting metrics-server=true in profile "addons-287708"
	I0918 20:26:48.676244  880249 addons.go:234] Setting addon metrics-server=true in "addons-287708"
	I0918 20:26:48.676271  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.676706  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.680400  880249 addons.go:69] Setting cloud-spanner=true in profile "addons-287708"
	I0918 20:26:48.680516  880249 addons.go:234] Setting addon cloud-spanner=true in "addons-287708"
	I0918 20:26:48.680607  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.681336  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.682103  880249 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-287708"
	I0918 20:26:48.682774  880249 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-287708"
	I0918 20:26:48.682809  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.688670  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.682116  880249 addons.go:69] Setting default-storageclass=true in profile "addons-287708"
	I0918 20:26:48.691174  880249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-287708"
	I0918 20:26:48.691487  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.682121  880249 addons.go:69] Setting gcp-auth=true in profile "addons-287708"
	I0918 20:26:48.704120  880249 mustload.go:65] Loading cluster: addons-287708
	I0918 20:26:48.704389  880249 config.go:182] Loaded profile config "addons-287708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 20:26:48.704709  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.723830  880249 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0918 20:26:48.725668  880249 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0918 20:26:48.725692  880249 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0918 20:26:48.725771  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:48.682126  880249 addons.go:69] Setting ingress=true in profile "addons-287708"
	I0918 20:26:48.736803  880249 addons.go:234] Setting addon ingress=true in "addons-287708"
	I0918 20:26:48.737989  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.738473  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.682129  880249 addons.go:69] Setting ingress-dns=true in profile "addons-287708"
	I0918 20:26:48.742186  880249 addons.go:234] Setting addon ingress-dns=true in "addons-287708"
	I0918 20:26:48.742257  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.682196  880249 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-287708"
	I0918 20:26:48.747768  880249 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-287708"
	I0918 20:26:48.747833  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.748431  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.682200  880249 addons.go:69] Setting registry=true in profile "addons-287708"
	I0918 20:26:48.682204  880249 addons.go:69] Setting storage-provisioner=true in profile "addons-287708"
	I0918 20:26:48.682208  880249 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-287708"
	I0918 20:26:48.682215  880249 addons.go:69] Setting volcano=true in profile "addons-287708"
	I0918 20:26:48.682218  880249 addons.go:69] Setting volumesnapshots=true in profile "addons-287708"
	I0918 20:26:48.682635  880249 out.go:177] * Verifying Kubernetes components...
	I0918 20:26:48.758970  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.762589  880249 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-287708"
	I0918 20:26:48.762998  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.764842  880249 addons.go:234] Setting addon volcano=true in "addons-287708"
	I0918 20:26:48.764896  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.779881  880249 addons.go:234] Setting addon volumesnapshots=true in "addons-287708"
	I0918 20:26:48.779961  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.780485  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.791999  880249 addons.go:234] Setting addon registry=true in "addons-287708"
	I0918 20:26:48.792062  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.792603  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.808166  880249 addons.go:234] Setting addon storage-provisioner=true in "addons-287708"
	I0918 20:26:48.808224  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.810896  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.813401  880249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:26:48.830498  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.836987  880249 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0918 20:26:48.842118  880249 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0918 20:26:48.844567  880249 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 20:26:48.844718  880249 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 20:26:48.844748  880249 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 20:26:48.844866  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:48.854060  880249 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 20:26:48.854086  880249 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 20:26:48.854160  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:48.874599  880249 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0918 20:26:48.878283  880249 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0918 20:26:48.878303  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0918 20:26:48.878367  880249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
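(The bash pipeline just above splices a hosts{} stanza into the CoreDNS Corefile so in-cluster pods can resolve host.minikube.internal to the gateway address 192.168.49.1, then replaces the ConfigMap. A hedged Go sketch that shells out to the same kind of pipeline — kubeconfig flags dropped and kubectl-on-PATH assumed:)

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Fetch the coredns ConfigMap, insert a hosts{} block ahead of the
    	// forward plugin, and replace the object -- same shape as the
    	// logged command, abbreviated for readability.
    	pipeline := `kubectl -n kube-system get configmap coredns -o yaml | ` +
    		`sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | ` +
    		`kubectl replace -f -`
    	out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
    	fmt.Println(string(out), err)
    }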
	I0918 20:26:48.878749  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:48.880747  880249 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 20:26:48.883741  880249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 20:26:48.885341  880249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 20:26:48.887194  880249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 20:26:48.889019  880249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 20:26:48.890857  880249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 20:26:48.892805  880249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 20:26:48.893461  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.900568  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:48.901304  880249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 20:26:48.901321  880249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 20:26:48.901385  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:48.901657  880249 addons.go:234] Setting addon default-storageclass=true in "addons-287708"
	I0918 20:26:48.901720  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.902255  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.905577  880249 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0918 20:26:48.909112  880249 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 20:26:48.909135  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0918 20:26:48.909192  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:48.974806  880249 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-287708"
	I0918 20:26:48.974895  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:48.975356  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:48.998265  880249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0918 20:26:48.998364  880249 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0918 20:26:49.001808  880249 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0918 20:26:49.009404  880249 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0918 20:26:49.009456  880249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 20:26:49.013465  880249 out.go:177]   - Using image docker.io/registry:2.8.3
	I0918 20:26:49.013620  880249 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 20:26:49.020989  880249 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0918 20:26:49.021880  880249 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 20:26:49.021904  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0918 20:26:49.021970  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
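(An "scp memory --> path" entry means the runner streams an asset held in memory straight to the node over SSH instead of copying a file from disk; the 434001-byte Volcano manifest above travels the same way. An illustrative stand-in using the ssh CLI — key path, port, and user copied from the log; the manifest payload is hypothetical and this is not ssh_runner's actual mechanism:)

    package main

    import (
    	"bytes"
    	"log"
    	"os/exec"
    )

    func main() {
    	manifest := []byte("# yaml payload held in memory") // hypothetical
    	cmd := exec.Command("ssh",
    		"-i", "/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa",
    		"-p", "33880", "docker@127.0.0.1",
    		"sudo tee /etc/kubernetes/addons/volcano-deployment.yaml >/dev/null")
    	cmd.Stdin = bytes.NewReader(manifest)
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }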
	I0918 20:26:49.041652  880249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 20:26:49.041679  880249 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 20:26:49.041777  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:49.059929  880249 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0918 20:26:49.059951  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0918 20:26:49.060027  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:49.060279  880249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 20:26:49.065084  880249 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 20:26:49.065110  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0918 20:26:49.065202  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:49.087742  880249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:26:49.093623  880249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:26:49.093694  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 20:26:49.093794  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:49.117280  880249 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0918 20:26:49.124429  880249 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 20:26:49.124456  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0918 20:26:49.124525  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:49.147469  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.195507  880249 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 20:26:49.195587  880249 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 20:26:49.195661  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:49.212209  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.213417  880249 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0918 20:26:49.218340  880249 out.go:177]   - Using image docker.io/busybox:stable
	I0918 20:26:49.220581  880249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 20:26:49.220605  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0918 20:26:49.220676  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:49.237084  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.239671  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.240808  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.253906  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.283977  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.293925  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.306462  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.307190  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.316955  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.331931  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	W0918 20:26:49.334673  880249 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0918 20:26:49.334708  880249 retry.go:31] will retry after 275.666685ms: ssh: handshake failed: EOF
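(With a dozen SSH clients dialing the same forwarded port at once, one handshake loses the race and is retried after a short delay — the retry.go pattern visible above. A generic sketch of that retry-with-backoff loop; the failing dial here is a placeholder:)

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping a growing delay between
    // failures -- the shape behind "will retry after 275.666685ms" above.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		time.Sleep(base * time.Duration(i+1))
    	}
    	return err
    }

    func main() {
    	err := retry(5, 250*time.Millisecond, func() error {
    		return errors.New("ssh: handshake failed: EOF") // placeholder dial
    	})
    	fmt.Println(err)
    }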
	I0918 20:26:49.341938  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:49.417612  880249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:26:49.619219  880249 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0918 20:26:49.619256  880249 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0918 20:26:49.827623  880249 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0918 20:26:49.827661  880249 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0918 20:26:49.834880  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 20:26:49.846921  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0918 20:26:49.860656  880249 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 20:26:49.860695  880249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 20:26:49.903243  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 20:26:49.932645  880249 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0918 20:26:49.932674  880249 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0918 20:26:49.943203  880249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 20:26:49.943239  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 20:26:49.962539  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 20:26:49.966308  880249 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0918 20:26:49.966348  880249 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0918 20:26:50.029947  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 20:26:50.056108  880249 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 20:26:50.056135  880249 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 20:26:50.063972  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:26:50.104504  880249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 20:26:50.104535  880249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 20:26:50.132776  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 20:26:50.199643  880249 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0918 20:26:50.199669  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0918 20:26:50.230356  880249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 20:26:50.230384  880249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 20:26:50.255630  880249 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0918 20:26:50.255666  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0918 20:26:50.271003  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 20:26:50.337851  880249 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 20:26:50.337884  880249 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 20:26:50.449535  880249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 20:26:50.449576  880249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 20:26:50.465852  880249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 20:26:50.465880  880249 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 20:26:50.534199  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0918 20:26:50.682552  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0918 20:26:50.727526  880249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 20:26:50.727569  880249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 20:26:50.728686  880249 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 20:26:50.728720  880249 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 20:26:50.929852  880249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 20:26:50.929891  880249 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 20:26:50.944114  880249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 20:26:50.944160  880249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 20:26:51.034337  880249 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.616692228s)
	I0918 20:26:51.034687  880249 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.156292486s)
	I0918 20:26:51.034709  880249 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0918 20:26:51.035761  880249 node_ready.go:35] waiting up to 6m0s for node "addons-287708" to be "Ready" ...
	I0918 20:26:51.083447  880249 node_ready.go:49] node "addons-287708" has status "Ready":"True"
	I0918 20:26:51.083487  880249 node_ready.go:38] duration metric: took 47.696045ms for node "addons-287708" to be "Ready" ...
	I0918 20:26:51.083499  880249 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:26:51.088790  880249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 20:26:51.088820  880249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 20:26:51.124867  880249 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kgm56" in "kube-system" namespace to be "Ready" ...
	I0918 20:26:51.179960  880249 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 20:26:51.180038  880249 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 20:26:51.390052  880249 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 20:26:51.390132  880249 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 20:26:51.478007  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 20:26:51.511809  880249 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 20:26:51.511886  880249 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 20:26:51.524863  880249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 20:26:51.524932  880249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 20:26:51.538850  880249 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-287708" context rescaled to 1 replicas
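(The rescale above trims CoreDNS to a single replica, since one is enough on a single-node cluster. minikube does this through client-go; the equivalent kubectl form, with the context name taken from the log, would be:)

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Mirror of the kapi.go rescale: one CoreDNS replica for one node.
    	err := exec.Command("kubectl", "--context", "addons-287708",
    		"-n", "kube-system", "scale", "deployment", "coredns",
    		"--replicas=1").Run()
    	if err != nil {
    		log.Fatal(err)
    	}
    }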
	I0918 20:26:51.627857  880249 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 20:26:51.627928  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 20:26:51.749616  880249 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 20:26:51.749693  880249 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 20:26:51.760813  880249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 20:26:51.760880  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 20:26:51.919545  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 20:26:51.989448  880249 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 20:26:51.989529  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0918 20:26:52.042821  880249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 20:26:52.042897  880249 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 20:26:52.085537  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 20:26:52.412175  880249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 20:26:52.412242  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 20:26:52.886462  880249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 20:26:52.886528  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 20:26:53.144877  880249 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-kgm56" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-kgm56" not found
	I0918 20:26:53.144959  880249 pod_ready.go:82] duration metric: took 2.020056097s for pod "coredns-7c65d6cfc9-kgm56" in "kube-system" namespace to be "Ready" ...
	E0918 20:26:53.144985  880249 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-kgm56" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-kgm56" not found
	I0918 20:26:53.145020  880249 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xrmwn" in "kube-system" namespace to be "Ready" ...
	I0918 20:26:53.305817  880249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 20:26:53.305892  880249 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 20:26:53.639004  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 20:26:55.187982  880249 pod_ready.go:103] pod "coredns-7c65d6cfc9-xrmwn" in "kube-system" namespace has status "Ready":"False"
	I0918 20:26:56.104228  880249 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 20:26:56.104377  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:56.132319  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:56.833404  880249 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 20:26:56.842740  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.007821229s)
	I0918 20:26:56.842951  880249 addons.go:475] Verifying addon ingress=true in "addons-287708"
	I0918 20:26:56.842894  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.939626406s)
	I0918 20:26:56.842923  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.995898276s)
	I0918 20:26:56.845054  880249 out.go:177] * Verifying ingress addon...
	I0918 20:26:56.847797  880249 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 20:26:56.852711  880249 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 20:26:56.852736  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
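(The kapi.go wait lines that follow poll every pod behind a label selector until all report Running; "Pending: [<nil>]" just means the newest pod has no phase message yet. A rough kubectl/JSONPath equivalent of that poll — selector and namespace from the log, not minikube's in-process client:)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podsRunning reports whether every pod matching the selector is in
    // phase Running (vacuously true for an empty list in this sketch).
    func podsRunning(ns, selector string) (bool, error) {
    	out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
    		"-l", selector,
    		"-o", `jsonpath={range .items[*]}{.status.phase}{"\n"}{end}`).Output()
    	if err != nil {
    		return false, err
    	}
    	for _, phase := range strings.Fields(string(out)) {
    		if phase != "Running" {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	for {
    		ok, err := podsRunning("ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
    		if err == nil && ok {
    			fmt.Println("ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }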
	I0918 20:26:56.968377  880249 addons.go:234] Setting addon gcp-auth=true in "addons-287708"
	I0918 20:26:56.968427  880249 host.go:66] Checking if "addons-287708" exists ...
	I0918 20:26:56.968911  880249 cli_runner.go:164] Run: docker container inspect addons-287708 --format={{.State.Status}}
	I0918 20:26:56.997082  880249 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 20:26:56.997138  880249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-287708
	I0918 20:26:57.033462  880249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/addons-287708/id_rsa Username:docker}
	I0918 20:26:57.361318  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:26:57.748353  880249 pod_ready.go:103] pod "coredns-7c65d6cfc9-xrmwn" in "kube-system" namespace has status "Ready":"False"
	I0918 20:26:57.879602  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:26:58.390721  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:26:58.708803  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.746222694s)
	I0918 20:26:58.708877  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.678896865s)
	I0918 20:26:58.709084  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.645082891s)
	I0918 20:26:58.709145  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.576344339s)
	I0918 20:26:58.709258  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.438233712s)
	I0918 20:26:58.709302  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.175070447s)
	I0918 20:26:58.709318  880249 addons.go:475] Verifying addon registry=true in "addons-287708"
	I0918 20:26:58.709478  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.026894734s)
	I0918 20:26:58.709723  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.231632711s)
	I0918 20:26:58.709745  880249 addons.go:475] Verifying addon metrics-server=true in "addons-287708"
	I0918 20:26:58.709847  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.790224859s)
	W0918 20:26:58.709876  880249 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 20:26:58.709893  880249 retry.go:31] will retry after 142.989134ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 20:26:58.709965  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.624351609s)
	I0918 20:26:58.712186  880249 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-287708 service yakd-dashboard -n yakd-dashboard
	
	I0918 20:26:58.712321  880249 out.go:177] * Verifying registry addon...
	I0918 20:26:58.715600  880249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 20:26:58.739030  880249 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 20:26:58.739060  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0918 20:26:58.763687  880249 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0918 20:26:58.853537  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
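(The failure being retried here is a classic CRD establishment race: the VolumeSnapshotClass object sits in the same apply batch as the CRD that defines it, and the apiserver is not yet serving snapshot.storage.k8s.io/v1 when the class is submitted — hence "ensure CRDs are installed first". Backing off ~143ms and reapplying succeeds once the CRDs are established, as the completion at 20:27:00 below shows. A generic sketch of that apply-with-retry shape; file list abbreviated, not minikube's actual helper:)

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    // applyWithRetry reapplies the manifests until the CRD-backed kinds
    // resolve, mirroring the retry.go behaviour logged above.
    func applyWithRetry(attempts int, files ...string) error {
    	args := []string{"apply", "--force"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command("kubectl", args...).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(150 * time.Millisecond) // let the CRDs establish
    	}
    	return err
    }

    func main() {
    	if err := applyWithRetry(3,
    		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
    		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
    		log.Fatal(err)
    	}
    }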
	I0918 20:26:58.906942  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:26:59.224253  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:26:59.352725  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:26:59.688486  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.049386139s)
	I0918 20:26:59.688581  880249 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-287708"
	I0918 20:26:59.688777  880249 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.69167392s)
	I0918 20:26:59.691586  880249 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 20:26:59.691743  880249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 20:26:59.694762  880249 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0918 20:26:59.695266  880249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 20:26:59.697503  880249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 20:26:59.697630  880249 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 20:26:59.703672  880249 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 20:26:59.703696  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:26:59.719858  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:26:59.759165  880249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 20:26:59.759242  880249 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 20:26:59.853146  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:26:59.858317  880249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 20:26:59.858391  880249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0918 20:26:59.945354  880249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 20:27:00.202870  880249 pod_ready.go:103] pod "coredns-7c65d6cfc9-xrmwn" in "kube-system" namespace has status "Ready":"False"
	I0918 20:27:00.207860  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:00.224683  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:00.388478  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:00.701323  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:00.720308  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:00.853177  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:00.910736  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.057145558s)
	I0918 20:27:01.141901  880249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.196439229s)
	I0918 20:27:01.144826  880249 addons.go:475] Verifying addon gcp-auth=true in "addons-287708"
	I0918 20:27:01.147316  880249 out.go:177] * Verifying gcp-auth addon...
	I0918 20:27:01.150844  880249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 20:27:01.154472  880249 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 20:27:01.257788  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:01.258750  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:01.357643  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:01.701008  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:01.721268  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:01.853301  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:02.258674  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:02.260344  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:02.353194  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:02.655547  880249 pod_ready.go:103] pod "coredns-7c65d6cfc9-xrmwn" in "kube-system" namespace has status "Ready":"False"
	I0918 20:27:02.700691  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:02.759840  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:02.860760  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:03.201928  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:03.220310  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:03.356923  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:03.702141  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:03.720424  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:03.852916  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:04.201269  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:04.220141  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:04.352611  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:04.699997  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:04.720251  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:04.852553  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:05.169329  880249 pod_ready.go:93] pod "coredns-7c65d6cfc9-xrmwn" in "kube-system" namespace has status "Ready":"True"
	I0918 20:27:05.169404  880249 pod_ready.go:82] duration metric: took 12.024356965s for pod "coredns-7c65d6cfc9-xrmwn" in "kube-system" namespace to be "Ready" ...
	I0918 20:27:05.169438  880249 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-287708" in "kube-system" namespace to be "Ready" ...
	I0918 20:27:05.189242  880249 pod_ready.go:93] pod "etcd-addons-287708" in "kube-system" namespace has status "Ready":"True"
	I0918 20:27:05.189315  880249 pod_ready.go:82] duration metric: took 19.852199ms for pod "etcd-addons-287708" in "kube-system" namespace to be "Ready" ...
	I0918 20:27:05.189346  880249 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-287708" in "kube-system" namespace to be "Ready" ...
	I0918 20:27:05.197160  880249 pod_ready.go:93] pod "kube-apiserver-addons-287708" in "kube-system" namespace has status "Ready":"True"
	I0918 20:27:05.197235  880249 pod_ready.go:82] duration metric: took 7.857066ms for pod "kube-apiserver-addons-287708" in "kube-system" namespace to be "Ready" ...
	I0918 20:27:05.197262  880249 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-287708" in "kube-system" namespace to be "Ready" ...
	I0918 20:27:05.204403  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:05.208398  880249 pod_ready.go:93] pod "kube-controller-manager-addons-287708" in "kube-system" namespace has status "Ready":"True"
	I0918 20:27:05.208423  880249 pod_ready.go:82] duration metric: took 11.139274ms for pod "kube-controller-manager-addons-287708" in "kube-system" namespace to be "Ready" ...
	I0918 20:27:05.208438  880249 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ts49q" in "kube-system" namespace to be "Ready" ...
	I0918 20:27:05.214565  880249 pod_ready.go:93] pod "kube-proxy-ts49q" in "kube-system" namespace has status "Ready":"True"
	I0918 20:27:05.214591  880249 pod_ready.go:82] duration metric: took 6.145472ms for pod "kube-proxy-ts49q" in "kube-system" namespace to be "Ready" ...
	I0918 20:27:05.214604  880249 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-287708" in "kube-system" namespace to be "Ready" ...
	I0918 20:27:05.219364  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:05.352840  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:05.549218  880249 pod_ready.go:93] pod "kube-scheduler-addons-287708" in "kube-system" namespace has status "Ready":"True"
	I0918 20:27:05.549245  880249 pod_ready.go:82] duration metric: took 334.632299ms for pod "kube-scheduler-addons-287708" in "kube-system" namespace to be "Ready" ...
	I0918 20:27:05.549257  880249 pod_ready.go:39] duration metric: took 14.465736719s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:27:05.549278  880249 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:27:05.549343  880249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:27:05.565420  880249 api_server.go:72] duration metric: took 16.890656875s to wait for apiserver process to appear ...
	I0918 20:27:05.565450  880249 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:27:05.565473  880249 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0918 20:27:05.573753  880249 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0918 20:27:05.574959  880249 api_server.go:141] control plane version: v1.31.1
	I0918 20:27:05.574986  880249 api_server.go:131] duration metric: took 9.529062ms to wait for apiserver health ...
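(The health gate above is a plain HTTPS GET against the apiserver's /healthz endpoint that expects a 200 response with body "ok". A minimal sketch; the real client authenticates against the cluster CA, and skipping verification here is only to keep the example self-contained:)

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Sketch only: trust-all TLS instead of the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }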
	I0918 20:27:05.574995  880249 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 20:27:05.700841  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:05.719532  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:05.757185  880249 system_pods.go:59] 18 kube-system pods found
	I0918 20:27:05.757231  880249 system_pods.go:61] "coredns-7c65d6cfc9-xrmwn" [d1c6d55f-c226-499e-861f-f0c2ff306c58] Running
	I0918 20:27:05.757241  880249 system_pods.go:61] "csi-hostpath-attacher-0" [3fbcf01c-8d95-4804-944c-f5952413e276] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 20:27:05.757249  880249 system_pods.go:61] "csi-hostpath-resizer-0" [caf2cac7-c893-4107-a0e4-b2c93dc1b87c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 20:27:05.757283  880249 system_pods.go:61] "csi-hostpathplugin-rtdl5" [15765f85-3701-490a-8246-dd8c5f9018c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 20:27:05.757289  880249 system_pods.go:61] "etcd-addons-287708" [833077c7-7a2b-47f6-ab92-62f138f80a88] Running
	I0918 20:27:05.757301  880249 system_pods.go:61] "kindnet-hrdvw" [5699511f-5a3a-43dc-a6fc-e440291f9f36] Running
	I0918 20:27:05.757306  880249 system_pods.go:61] "kube-apiserver-addons-287708" [842a2cc0-e894-4f2c-bd8d-0d403c026e5f] Running
	I0918 20:27:05.757311  880249 system_pods.go:61] "kube-controller-manager-addons-287708" [14f73082-5d24-4a68-8b51-0cc718d48c89] Running
	I0918 20:27:05.757322  880249 system_pods.go:61] "kube-ingress-dns-minikube" [262532ac-69f8-4733-a88a-5dcadd8377a3] Running
	I0918 20:27:05.757326  880249 system_pods.go:61] "kube-proxy-ts49q" [2cabee12-188d-4a79-b496-dc5d40d56ff7] Running
	I0918 20:27:05.757330  880249 system_pods.go:61] "kube-scheduler-addons-287708" [9dc0d55b-c557-4de8-874d-cf6510fd2762] Running
	I0918 20:27:05.757350  880249 system_pods.go:61] "metrics-server-84c5f94fbc-8x6cq" [5cf58f18-a262-4368-a7fc-d111916eb6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 20:27:05.757364  880249 system_pods.go:61] "nvidia-device-plugin-daemonset-wvfsm" [7bf5fc49-f47e-428c-af3f-1c6152a86830] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0918 20:27:05.757371  880249 system_pods.go:61] "registry-66c9cd494c-6vbt5" [235575f2-9f39-421f-9114-4b36aa14f2ec] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 20:27:05.757393  880249 system_pods.go:61] "registry-proxy-lv8dc" [f5854149-c566-4094-b719-309fabacf2f1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 20:27:05.757406  880249 system_pods.go:61] "snapshot-controller-56fcc65765-h7vhz" [a0d81a21-e913-451f-addb-47ec447e4e20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 20:27:05.757413  880249 system_pods.go:61] "snapshot-controller-56fcc65765-nd9ds" [b76df15f-3e91-448f-a12a-7f3e4f02e9f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 20:27:05.757423  880249 system_pods.go:61] "storage-provisioner" [102d62b5-472e-40b8-9cad-a5aae09e858e] Running
	I0918 20:27:05.757431  880249 system_pods.go:74] duration metric: took 182.428149ms to wait for pod list to return data ...
	I0918 20:27:05.757440  880249 default_sa.go:34] waiting for default service account to be created ...
	I0918 20:27:05.852756  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:05.950184  880249 default_sa.go:45] found service account: "default"
	I0918 20:27:05.950217  880249 default_sa.go:55] duration metric: took 192.765021ms for default service account to be created ...
	I0918 20:27:05.950228  880249 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 20:27:06.172210  880249 system_pods.go:86] 18 kube-system pods found
	I0918 20:27:06.172250  880249 system_pods.go:89] "coredns-7c65d6cfc9-xrmwn" [d1c6d55f-c226-499e-861f-f0c2ff306c58] Running
	I0918 20:27:06.172272  880249 system_pods.go:89] "csi-hostpath-attacher-0" [3fbcf01c-8d95-4804-944c-f5952413e276] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 20:27:06.172312  880249 system_pods.go:89] "csi-hostpath-resizer-0" [caf2cac7-c893-4107-a0e4-b2c93dc1b87c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 20:27:06.172322  880249 system_pods.go:89] "csi-hostpathplugin-rtdl5" [15765f85-3701-490a-8246-dd8c5f9018c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 20:27:06.172332  880249 system_pods.go:89] "etcd-addons-287708" [833077c7-7a2b-47f6-ab92-62f138f80a88] Running
	I0918 20:27:06.172339  880249 system_pods.go:89] "kindnet-hrdvw" [5699511f-5a3a-43dc-a6fc-e440291f9f36] Running
	I0918 20:27:06.172344  880249 system_pods.go:89] "kube-apiserver-addons-287708" [842a2cc0-e894-4f2c-bd8d-0d403c026e5f] Running
	I0918 20:27:06.172355  880249 system_pods.go:89] "kube-controller-manager-addons-287708" [14f73082-5d24-4a68-8b51-0cc718d48c89] Running
	I0918 20:27:06.172374  880249 system_pods.go:89] "kube-ingress-dns-minikube" [262532ac-69f8-4733-a88a-5dcadd8377a3] Running
	I0918 20:27:06.172387  880249 system_pods.go:89] "kube-proxy-ts49q" [2cabee12-188d-4a79-b496-dc5d40d56ff7] Running
	I0918 20:27:06.172393  880249 system_pods.go:89] "kube-scheduler-addons-287708" [9dc0d55b-c557-4de8-874d-cf6510fd2762] Running
	I0918 20:27:06.172413  880249 system_pods.go:89] "metrics-server-84c5f94fbc-8x6cq" [5cf58f18-a262-4368-a7fc-d111916eb6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 20:27:06.172427  880249 system_pods.go:89] "nvidia-device-plugin-daemonset-wvfsm" [7bf5fc49-f47e-428c-af3f-1c6152a86830] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0918 20:27:06.172436  880249 system_pods.go:89] "registry-66c9cd494c-6vbt5" [235575f2-9f39-421f-9114-4b36aa14f2ec] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 20:27:06.172448  880249 system_pods.go:89] "registry-proxy-lv8dc" [f5854149-c566-4094-b719-309fabacf2f1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 20:27:06.172455  880249 system_pods.go:89] "snapshot-controller-56fcc65765-h7vhz" [a0d81a21-e913-451f-addb-47ec447e4e20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 20:27:06.172462  880249 system_pods.go:89] "snapshot-controller-56fcc65765-nd9ds" [b76df15f-3e91-448f-a12a-7f3e4f02e9f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 20:27:06.172471  880249 system_pods.go:89] "storage-provisioner" [102d62b5-472e-40b8-9cad-a5aae09e858e] Running
	I0918 20:27:06.172495  880249 system_pods.go:126] duration metric: took 222.259792ms to wait for k8s-apps to be running ...
	I0918 20:27:06.172504  880249 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 20:27:06.172584  880249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:27:06.200395  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:06.220677  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:06.225903  880249 system_svc.go:56] duration metric: took 53.388372ms WaitForService to wait for kubelet
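The kubelet check above is a single systemd query run through minikube's ssh_runner; the ~53ms duration is essentially one round trip. Approximated with a plain local exec call (an assumption for illustration; the real code runs the command over SSH inside the node container):

	// Sketch of `sudo systemctl is-active --quiet service kubelet`:
	// is-active --quiet prints nothing and exits 0 iff the unit is
	// active, so a nil error from Run() means kubelet is up.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubeletActive() bool {
		return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		fmt.Println("kubelet running:", kubeletActive())
	}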
	I0918 20:27:06.225947  880249 kubeadm.go:582] duration metric: took 17.551176368s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:27:06.225985  880249 node_conditions.go:102] verifying NodePressure condition ...
	I0918 20:27:06.365739  880249 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0918 20:27:06.365785  880249 node_conditions.go:123] node cpu capacity is 2
	I0918 20:27:06.365797  880249 node_conditions.go:105] duration metric: took 139.799179ms to run NodePressure ...
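The NodePressure step reads its two capacity figures straight off the Node object. A client-go sketch of the same read (hypothetical helper, not minikube's node_conditions.go; building the clientset, e.g. via k8s.io/client-go/tools/clientcmd, is omitted):

	// Sketch: list nodes and print the capacity values logged above
	// (cpu and ephemeral-storage) from each Node's status.
	package sketch

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func printNodeCapacity(cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
				n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		}
		return nil
	}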
	I0918 20:27:06.365811  880249 start.go:241] waiting for startup goroutines ...
	I0918 20:27:06.367366  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:06.700687  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:06.722392  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:06.852737  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:07.258276  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:07.258971  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:07.352980  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:07.700667  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:07.720332  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:07.853133  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:08.259535  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:08.260258  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:08.353454  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:08.701179  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:08.719611  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:08.853582  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:09.207001  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:09.257048  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:09.358100  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:09.701082  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:09.719756  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:09.852338  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:10.200203  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:10.227996  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:10.352952  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:10.700189  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:10.719564  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:10.852674  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:11.206378  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:11.219824  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:11.354232  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:11.702924  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:11.720958  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:11.857973  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:12.200941  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:12.221769  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:12.354598  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:12.700950  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:12.720178  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:12.853470  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:13.205417  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:13.259055  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:13.365572  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:13.701810  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:13.719584  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:13.852876  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:14.200182  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:14.219203  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:14.365843  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:14.757061  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:14.759187  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:14.857613  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:15.200576  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:15.219821  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:15.352312  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:15.700752  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:15.719203  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:15.853456  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:16.257915  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:16.258473  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:16.352044  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:16.700302  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:16.723991  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:16.853356  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:17.199675  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:17.219869  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:17.351836  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:17.704984  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:17.719184  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:17.852472  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:18.200198  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:18.219301  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:18.355679  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:18.707088  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:18.719838  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:18.852457  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:19.200367  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:19.219606  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:19.352925  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:19.701527  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:19.720085  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:19.853544  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:20.201503  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:20.219617  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:20.353488  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:20.756889  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:20.757629  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:20.852206  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:21.199747  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:21.219434  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:21.375706  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:21.701659  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:21.719776  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:21.852532  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:22.200166  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:22.219463  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:22.353925  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:22.700662  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:22.719713  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:22.853338  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:23.200508  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:23.219982  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:23.352522  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:23.700989  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:23.720171  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:23.853227  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:24.200310  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:24.219271  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:24.352700  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:24.700431  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:24.720317  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:24.852853  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:25.200881  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:25.219769  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 20:27:25.352316  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:25.699749  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:25.720205  880249 kapi.go:107] duration metric: took 27.004602965s to wait for kubernetes.io/minikube-addons=registry ...
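Each kapi.go:96 line above is one iteration of a roughly half-second poll over a label selector, ending when every matching pod reports Running; the registry selector has just completed after 27s. The pattern, sketched with client-go's wait helpers (the function name and intervals are illustrative, not minikube's kapi package):

	// Sketch: poll pods matching a label selector until all are Running.
	// Transient list errors and empty results keep the loop going, which
	// matches the repeated "Pending: [<nil>]" states printed above.
	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending, as in the log
				}
			}
			return true, nil
		})
	}

Something like waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute) would reproduce the registry wait that just finished; the csi-hostpath-driver, ingress-nginx, and gcp-auth loops that continue below are the same pattern with different selectors.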
	I0918 20:27:25.877200  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:26.202508  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:26.352640  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:26.700672  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:26.853046  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:27.200970  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:27.353012  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:27.759983  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:27.857550  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:28.200012  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:28.353533  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:28.700213  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:28.854382  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:29.200866  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:29.352031  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:29.700974  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:29.852546  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:30.207882  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:30.356159  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:30.703782  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:30.853682  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:31.200459  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:31.352807  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:31.702610  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:31.853618  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:32.256803  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:32.357310  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:32.700114  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:32.853275  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:33.204264  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:33.353660  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:33.759721  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:33.852436  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:34.257559  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:34.358609  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:34.700329  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:34.855076  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:35.201140  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:35.353267  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:35.762159  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:35.852272  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:36.199812  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:36.351811  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:36.700294  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:36.852640  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:37.201482  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:37.353200  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:37.701664  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:37.852611  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:38.200150  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:38.352721  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:38.700015  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:38.852263  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:39.200517  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:39.352834  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:39.701784  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:39.852566  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:40.201079  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:40.353489  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:40.701863  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:40.853645  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:41.200778  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:41.352340  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:41.703612  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:41.852983  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:42.202435  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:42.353611  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:42.703733  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:42.853060  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:43.199723  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:43.352465  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:43.700474  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:43.852896  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:44.200823  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:44.351871  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:44.702136  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:44.862906  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:45.202003  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:45.357480  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:45.700671  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:45.852736  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:46.261071  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:46.352734  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:46.700727  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:46.854095  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:47.200578  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:47.352428  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:47.702326  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:47.853002  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:48.200548  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:48.352736  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:48.757823  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:48.859118  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:49.199788  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:49.352020  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:49.701547  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:49.854023  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:50.205082  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 20:27:50.357886  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:50.701171  880249 kapi.go:107] duration metric: took 51.00591026s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 20:27:50.852821  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:51.353382  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:51.852641  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:52.352231  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:52.853704  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:53.352523  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:53.852199  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:54.352952  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:54.852822  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:55.351973  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:55.852762  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:56.352721  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:56.852693  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:57.353365  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:57.852569  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:58.352689  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:58.852804  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:59.352136  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:27:59.852385  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:00.358410  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:00.852965  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:01.352383  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:01.852584  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:02.352664  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:02.852961  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:03.352737  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:03.851868  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:04.352680  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:04.852286  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:05.353071  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:05.853002  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:06.353838  880249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 20:28:06.853112  880249 kapi.go:107] duration metric: took 1m10.005313851s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0918 20:28:24.155683  880249 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 20:28:24.155709  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:24.654918  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:25.155032  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:25.654992  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:26.155297  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:26.654693  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:27.154784  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:27.654830  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:28.155032  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:28.654934  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:29.155270  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:29.654439  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:30.154812  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:30.654704  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:31.154542  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:31.655219  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:32.155174  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:32.654615  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:33.154564  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:33.655350  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:34.154474  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:34.654328  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:35.155646  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:35.654642  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:36.155866  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:36.654467  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:37.155094  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:37.654690  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:38.155008  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:38.654300  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:39.154537  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:39.654165  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:40.155347  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:40.654161  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:41.154378  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:41.654569  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:42.154877  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:42.654497  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:43.154727  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:43.655292  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:44.154430  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:44.654238  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:45.162172  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:45.654200  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:46.155722  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:46.655127  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:47.155087  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:47.654092  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:48.154842  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:48.655093  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:49.155151  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:49.654791  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:50.154609  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:50.654368  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:51.154773  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:51.654868  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:52.154316  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:52.655071  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:53.154876  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:53.654858  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:54.154899  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:54.654427  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:55.155615  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:55.654979  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:56.154551  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:56.654544  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:57.154672  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:57.654410  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:58.154102  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:58.654031  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:59.155576  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:28:59.654878  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:00.155785  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:00.654502  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:01.154789  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:01.658367  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:02.154302  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:02.654182  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:03.155395  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:03.654783  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:04.155614  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:04.664020  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:05.162334  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:05.655225  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:06.154600  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:06.654805  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:07.154128  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:07.654675  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:08.154657  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:08.654460  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:09.154563  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:09.654821  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:10.155218  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:10.653990  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:11.155304  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:11.654935  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:12.154417  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:12.654719  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:13.154571  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:13.654769  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:14.154535  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:14.653987  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:15.161821  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:15.655384  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:16.155388  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:16.655062  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:17.154734  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:17.654549  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:18.155607  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:18.655473  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:19.154813  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:19.655401  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:20.155227  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:20.655059  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:21.155290  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:21.655535  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:22.155190  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:22.654835  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:23.154516  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:23.654848  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:24.154340  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:24.655266  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:25.154963  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:25.655036  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:26.154516  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:26.654236  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:27.154671  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:27.654625  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:28.154577  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:28.654803  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:29.154324  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:29.654188  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:30.156066  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:30.655308  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:31.155445  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:31.663337  880249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 20:29:32.154916  880249 kapi.go:107] duration metric: took 2m31.004070026s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0918 20:29:32.156840  880249 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-287708 cluster.
	I0918 20:29:32.159270  880249 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 20:29:32.161387  880249 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0918 20:29:32.163413  880249 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, volcano, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0918 20:29:32.165324  880249 addons.go:510] duration metric: took 2m43.490328317s for enable addons: enabled=[nvidia-device-plugin cloud-spanner volcano storage-provisioner ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0918 20:29:32.165382  880249 start.go:246] waiting for cluster config update ...
	I0918 20:29:32.165420  880249 start.go:255] writing updated cluster config ...
	I0918 20:29:32.165752  880249 ssh_runner.go:195] Run: rm -f paused
	I0918 20:29:32.540454  880249 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 20:29:32.545503  880249 out.go:177] * Done! kubectl is now configured to use "addons-287708" cluster and "default" namespace by default
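
The repeated "waiting for pod ... current state: Pending" lines above are minikube's addon-readiness poll: roughly every 500ms it lists pods matching the label kubernetes.io/minikube-addons=gcp-auth and loops until they report Running (kapi.go:96), then logs the total wait at kapi.go:107. Below is a minimal client-go sketch of that pattern; it is illustrative only, not minikube's actual kapi implementation, and the file and function names are made up.

    // waitforlabel.go: poll pods matching a label selector until all report Running.
    // Minimal sketch of the readiness loop seen in the log above; illustrative only.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		ready := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				ready = false
    				// Same shape as the log lines above.
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    			}
    		}
    		if ready {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // the caller's overall timeout expired
    		case <-time.After(500 * time.Millisecond): // same cadence as the log above
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    	defer cancel()
    	if err := waitForLabel(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
    		panic(err)
    	}
    }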
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c0086a4e886ba       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   4e97850455f2d       gadget-jc8pn
	e5bd5ffb9c0f6       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   50b1b59432b60       gcp-auth-89d5ffd79-87tbw
	64fa9f7abce5d       8b46b1cd48760       4 minutes ago       Running             admission                                0                   fc469f9492082       volcano-admission-77d7d48b68-f9m4v
	23976bdaf821a       289a818c8d9c5       4 minutes ago       Running             controller                               0                   f3c65ceb4289f       ingress-nginx-controller-bc57996ff-wrf7z
	042299f9518bd       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   931cf2903fd96       csi-hostpathplugin-rtdl5
	7d10003015f8a       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   931cf2903fd96       csi-hostpathplugin-rtdl5
	5a45e693cb42e       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   931cf2903fd96       csi-hostpathplugin-rtdl5
	a2679045e3379       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   931cf2903fd96       csi-hostpathplugin-rtdl5
	bee248624216e       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   931cf2903fd96       csi-hostpathplugin-rtdl5
	93041364f0aa7       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   931cf2903fd96       csi-hostpathplugin-rtdl5
	c4969d5f55f6c       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   5ead4b60ae8b6       csi-hostpath-attacher-0
	40d8bb712e16a       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   6e8e7daba7f2b       csi-hostpath-resizer-0
	421adf13b54e6       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   6f29c2c68a268       volcano-scheduler-576bc46687-gn4g8
	a8e4d508e024b       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   c2d2129c3b4c8       snapshot-controller-56fcc65765-nd9ds
	2e080514f762d       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   f417f50890a44       volcano-controllers-56675bb4d5-dzjcw
	9c15106f282dc       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   d0260ec96156a       snapshot-controller-56fcc65765-h7vhz
	72aaf78dd4559       420193b27261a       5 minutes ago       Exited              patch                                    0                   518e6c7abbec9       ingress-nginx-admission-patch-9vs9w
	be79b8c68387b       420193b27261a       5 minutes ago       Exited              create                                   0                   2ee56a437b35d       ingress-nginx-admission-create-2c2dv
	e1ea932f25434       77bdba588b953       5 minutes ago       Running             yakd                                     0                   ff10e1e6361b6       yakd-dashboard-67d98fc6b-m6mdk
	f1da3ae7fc895       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   1c7120cd241db       registry-proxy-lv8dc
	e95429838d73e       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   7cddcfcf6eeea       nvidia-device-plugin-daemonset-wvfsm
	829a45052a2b5       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   1ed11864785c4       local-path-provisioner-86d989889c-m7x9d
	1ca6a80d9f00e       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   5f53ce8ab6814       registry-66c9cd494c-6vbt5
	377274049c731       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   0b8e477c69ade       metrics-server-84c5f94fbc-8x6cq
	76f5dae089e7b       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   0a080e65bb2f5       cloud-spanner-emulator-769b77f747-lllvq
	e44655cb5b65c       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   b1259748a0410       coredns-7c65d6cfc9-xrmwn
	293f2965fe84e       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   c1302eb36bbfd       kube-ingress-dns-minikube
	a09647c96d44f       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   6dc795324c02b       storage-provisioner
	591515334fb2a       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   7b02b9d33c8f1       kindnet-hrdvw
	763d5d393b074       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   8cb26e6d67404       kube-proxy-ts49q
	885e6302d9811       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   4e03af22518b4       kube-controller-manager-addons-287708
	ae541489d762e       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   a9fcc654f757a       kube-scheduler-addons-287708
	750cae71b7a96       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   c659bb250edf9       kube-apiserver-addons-287708
	5b34c4bde5262       27e3830e14027       6 minutes ago       Running             etcd                                     0                   74c9ed685a04f       etcd-addons-287708
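
The table above is the node's CRI container listing (the same view "crictl ps -a" gives on the node). Note the gadget container in state Exited at attempt 5; the kubelet entries at the end of this report show the matching CrashLoopBackOff. Assuming crictl is available on the node (e.g. after "minikube ssh"), the usual way to dig into such a container is:

    crictl ps -a                  # list all containers, including exited ones
    crictl logs c0086a4e886ba     # dump the gadget container's output (ID prefixes are accepted)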
	
	
	==> containerd <==
	Sep 18 20:30:33 addons-287708 containerd[814]: time="2024-09-18T20:30:33.052144557Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 18 20:30:33 addons-287708 containerd[814]: time="2024-09-18T20:30:33.055881164Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 125.300915ms"
	Sep 18 20:30:33 addons-287708 containerd[814]: time="2024-09-18T20:30:33.055937615Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 18 20:30:33 addons-287708 containerd[814]: time="2024-09-18T20:30:33.058280403Z" level=info msg="CreateContainer within sandbox \"4e97850455f2d430ec238caade5e70a1d3e286482e163224116a8f860c0b0f4e\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 18 20:30:33 addons-287708 containerd[814]: time="2024-09-18T20:30:33.082227612Z" level=info msg="CreateContainer within sandbox \"4e97850455f2d430ec238caade5e70a1d3e286482e163224116a8f860c0b0f4e\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237\""
	Sep 18 20:30:33 addons-287708 containerd[814]: time="2024-09-18T20:30:33.083189186Z" level=info msg="StartContainer for \"c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237\""
	Sep 18 20:30:33 addons-287708 containerd[814]: time="2024-09-18T20:30:33.139808055Z" level=info msg="StartContainer for \"c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237\" returns successfully"
	Sep 18 20:30:34 addons-287708 containerd[814]: time="2024-09-18T20:30:34.681707348Z" level=error msg="ExecSync for \"c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237\" failed" error="failed to exec in container: failed to start exec \"0432a28766fe474976402587a368efba707f02fc2970e6ad12c1261a2d45f2d3\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 18 20:30:34 addons-287708 containerd[814]: time="2024-09-18T20:30:34.693010435Z" level=error msg="ExecSync for \"c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237\" failed" error="failed to exec in container: failed to start exec \"ffa1e3d012c5d2128d1724dc3df260c8639237a4fa7db193ff1d286b7be63c06\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 18 20:30:34 addons-287708 containerd[814]: time="2024-09-18T20:30:34.707055112Z" level=error msg="ExecSync for \"c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237\" failed" error="failed to exec in container: failed to start exec \"99b400095de672da6c8fc5cecfc221e4335e1df398a5a5838a5b85672b505d22\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 18 20:30:34 addons-287708 containerd[814]: time="2024-09-18T20:30:34.829886946Z" level=info msg="shim disconnected" id=c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237 namespace=k8s.io
	Sep 18 20:30:34 addons-287708 containerd[814]: time="2024-09-18T20:30:34.829947278Z" level=warning msg="cleaning up after shim disconnected" id=c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237 namespace=k8s.io
	Sep 18 20:30:34 addons-287708 containerd[814]: time="2024-09-18T20:30:34.829959200Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 18 20:30:35 addons-287708 containerd[814]: time="2024-09-18T20:30:35.247903257Z" level=info msg="RemoveContainer for \"e2545afa9c1c515c49bcb22b818cc9911dbd5e2f8138f009a6acaafe2e9600c1\""
	Sep 18 20:30:35 addons-287708 containerd[814]: time="2024-09-18T20:30:35.255025201Z" level=info msg="RemoveContainer for \"e2545afa9c1c515c49bcb22b818cc9911dbd5e2f8138f009a6acaafe2e9600c1\" returns successfully"
	Sep 18 20:30:43 addons-287708 containerd[814]: time="2024-09-18T20:30:43.993999296Z" level=info msg="RemoveContainer for \"0fe1be14bedbfd22273841f9ef36df753c0ef50dfd214a1cbcd3ab7388e4f2d3\""
	Sep 18 20:30:44 addons-287708 containerd[814]: time="2024-09-18T20:30:44.001050266Z" level=info msg="RemoveContainer for \"0fe1be14bedbfd22273841f9ef36df753c0ef50dfd214a1cbcd3ab7388e4f2d3\" returns successfully"
	Sep 18 20:30:44 addons-287708 containerd[814]: time="2024-09-18T20:30:44.007899186Z" level=info msg="StopPodSandbox for \"ab2a447f84810fb4e8b750d96b16bc7141a053133f8571d318c592b8975fd8ea\""
	Sep 18 20:30:44 addons-287708 containerd[814]: time="2024-09-18T20:30:44.018694639Z" level=info msg="TearDown network for sandbox \"ab2a447f84810fb4e8b750d96b16bc7141a053133f8571d318c592b8975fd8ea\" successfully"
	Sep 18 20:30:44 addons-287708 containerd[814]: time="2024-09-18T20:30:44.018882561Z" level=info msg="StopPodSandbox for \"ab2a447f84810fb4e8b750d96b16bc7141a053133f8571d318c592b8975fd8ea\" returns successfully"
	Sep 18 20:30:44 addons-287708 containerd[814]: time="2024-09-18T20:30:44.019519515Z" level=info msg="RemovePodSandbox for \"ab2a447f84810fb4e8b750d96b16bc7141a053133f8571d318c592b8975fd8ea\""
	Sep 18 20:30:44 addons-287708 containerd[814]: time="2024-09-18T20:30:44.019562813Z" level=info msg="Forcibly stopping sandbox \"ab2a447f84810fb4e8b750d96b16bc7141a053133f8571d318c592b8975fd8ea\""
	Sep 18 20:30:44 addons-287708 containerd[814]: time="2024-09-18T20:30:44.027153827Z" level=info msg="TearDown network for sandbox \"ab2a447f84810fb4e8b750d96b16bc7141a053133f8571d318c592b8975fd8ea\" successfully"
	Sep 18 20:30:44 addons-287708 containerd[814]: time="2024-09-18T20:30:44.033684364Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab2a447f84810fb4e8b750d96b16bc7141a053133f8571d318c592b8975fd8ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 18 20:30:44 addons-287708 containerd[814]: time="2024-09-18T20:30:44.033811165Z" level=info msg="RemovePodSandbox \"ab2a447f84810fb4e8b750d96b16bc7141a053133f8571d318c592b8975fd8ea\" returns successfully"
	
	
	==> coredns [e44655cb5b65c07f8e15efc4ddf0c669aa99629b33353e7a8b1e10dd17841f5c] <==
	[INFO] 10.244.0.7:52113 - 13082 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010011s
	[INFO] 10.244.0.7:56451 - 15831 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002420795s
	[INFO] 10.244.0.7:56451 - 62680 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002078256s
	[INFO] 10.244.0.7:36061 - 1013 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014487s
	[INFO] 10.244.0.7:36061 - 35819 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00009042s
	[INFO] 10.244.0.7:49180 - 24127 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122437s
	[INFO] 10.244.0.7:49180 - 64827 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000274371s
	[INFO] 10.244.0.7:51715 - 51340 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088451s
	[INFO] 10.244.0.7:51715 - 55182 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000160574s
	[INFO] 10.244.0.7:36756 - 59472 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067635s
	[INFO] 10.244.0.7:36756 - 862 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054827s
	[INFO] 10.244.0.7:40776 - 32832 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005254052s
	[INFO] 10.244.0.7:40776 - 20803 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005374052s
	[INFO] 10.244.0.7:50024 - 44888 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081641s
	[INFO] 10.244.0.7:50024 - 34132 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000063081s
	[INFO] 10.244.0.24:45545 - 56490 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155766s
	[INFO] 10.244.0.24:34006 - 13765 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000109218s
	[INFO] 10.244.0.24:59573 - 46584 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000089173s
	[INFO] 10.244.0.24:60451 - 53420 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000085374s
	[INFO] 10.244.0.24:40073 - 51902 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000084341s
	[INFO] 10.244.0.24:57444 - 53980 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085284s
	[INFO] 10.244.0.24:52901 - 6796 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002850939s
	[INFO] 10.244.0.24:53198 - 48474 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002770299s
	[INFO] 10.244.0.24:47180 - 16440 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002046608s
	[INFO] 10.244.0.24:47117 - 43221 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001862355s
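
The NXDOMAIN/NOERROR pairs above are ordinary resolver search-path expansion, not failures: pod resolv.conf files in this cluster carry several search domains plus ndots:5, so a name such as registry.kube-system.svc.cluster.local is first tried with each search suffix appended (hence the NXDOMAIN answers for ...svc.cluster.local.cluster.local, ...us-east-2.compute.internal, and so on) before the literal name resolves NOERROR. A typical pod resolv.conf looks roughly like this (a sketch; the nameserver IP shown is the conventional kube-dns ClusterIP, and the exact search list varies by namespace and host):

    nameserver 10.96.0.10
    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    options ndots:5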
	
	
	==> describe nodes <==
	Name:               addons-287708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-287708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=addons-287708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_26_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-287708
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-287708"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:26:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-287708
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:32:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:29:47 +0000   Wed, 18 Sep 2024 20:26:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:29:47 +0000   Wed, 18 Sep 2024 20:26:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:29:47 +0000   Wed, 18 Sep 2024 20:26:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:29:47 +0000   Wed, 18 Sep 2024 20:26:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-287708
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 00f173cd24a74972b51df633cf34a84d
	  System UUID:                ff450faf-49f4-4b92-83b4-c9ad423ac900
	  Boot ID:                    3a935d26-70f7-413a-bfb9-48f0fb4fad17
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-lllvq     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  gadget                      gadget-jc8pn                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-87tbw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-wrf7z    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-xrmwn                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-rtdl5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-287708                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-hrdvw                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m2s
	  kube-system                 kube-apiserver-addons-287708                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-controller-manager-addons-287708       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-ts49q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-addons-287708                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 metrics-server-84c5f94fbc-8x6cq             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-wvfsm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-66c9cd494c-6vbt5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-proxy-lv8dc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 snapshot-controller-56fcc65765-h7vhz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-nd9ds        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  local-path-storage          local-path-provisioner-86d989889c-m7x9d     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  volcano-system              volcano-admission-77d7d48b68-f9m4v          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-dzjcw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-scheduler-576bc46687-gn4g8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-m6mdk              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 6m    kube-proxy       
	  Normal   Starting                 6m8s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m7s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s  kubelet          Node addons-287708 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s  kubelet          Node addons-287708 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s  kubelet          Node addons-287708 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s  node-controller  Node addons-287708 event: Registered Node addons-287708 in Controller
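
A worked consequence of the resource table above: allocatable cpu is 2 (2000m), and the resident pods already request 1050m (52%): 100m ingress-nginx + 100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler + 100m metrics-server. That leaves only 950m of schedulable CPU, so any new pod requesting a full CPU (1000m) or more cannot fit, and the scheduler reports Insufficient cpu on this node.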
	
	
	==> dmesg <==
	[Sep18 19:25] FS-Cache: Duplicate cookie detected
	[  +0.000711] FS-Cache: O-cookie c=00000037 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000988] FS-Cache: O-cookie d=00000000914407f3{9P.session} n=00000000d90436ff
	[  +0.001261] FS-Cache: O-key=[10] '34323937373134353332'
	[  +0.000791] FS-Cache: N-cookie c=00000038 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000967] FS-Cache: N-cookie d=00000000914407f3{9P.session} n=0000000085598cfe
	[  +0.001101] FS-Cache: N-key=[10] '34323937373134353332'
	
	
	==> etcd [5b34c4bde5262d444fb7aeedaf32e431bfd45fef92505024553188cad9d01730] <==
	{"level":"info","ts":"2024-09-18T20:26:37.125434Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-18T20:26:37.125459Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-18T20:26:37.125477Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-18T20:26:37.127144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-18T20:26:37.127246Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-18T20:26:38.008145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-18T20:26:38.008287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-18T20:26:38.008373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-18T20:26:38.008421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-18T20:26:38.008458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-18T20:26:38.008505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-18T20:26:38.008547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-18T20:26:38.012278Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-287708 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T20:26:38.012640Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:26:38.012744Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:26:38.013127Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:26:38.015345Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T20:26:38.015509Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T20:26:38.015781Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:26:38.016955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-18T20:26:38.017392Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:26:38.017494Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:26:38.017532Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:26:38.020457Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:26:38.025316Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [e5bd5ffb9c0f6dbee23bd2651435c86d99b26feb6839fd3cccfbd3e36158045d] <==
	2024/09/18 20:29:31 GCP Auth Webhook started!
	2024/09/18 20:29:48 Ready to marshal response ...
	2024/09/18 20:29:48 Ready to write response ...
	2024/09/18 20:29:50 Ready to marshal response ...
	2024/09/18 20:29:50 Ready to write response ...
	
	
	==> kernel <==
	 20:32:51 up  4:15,  0 users,  load average: 0.61, 1.76, 2.45
	Linux addons-287708 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [591515334fb2a1d69831229789469907ab6ba0b4c4c513c04b48381a56cbf1aa] <==
	I0918 20:30:51.014175       1 main.go:299] handling current node
	I0918 20:31:01.024565       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:31:01.024601       1 main.go:299] handling current node
	I0918 20:31:11.020184       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:31:11.020220       1 main.go:299] handling current node
	I0918 20:31:21.013407       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:31:21.013609       1 main.go:299] handling current node
	I0918 20:31:31.017607       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:31:31.017647       1 main.go:299] handling current node
	I0918 20:31:41.020842       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:31:41.020877       1 main.go:299] handling current node
	I0918 20:31:51.013536       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:31:51.013574       1 main.go:299] handling current node
	I0918 20:32:01.018807       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:32:01.018845       1 main.go:299] handling current node
	I0918 20:32:11.022175       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:32:11.022209       1 main.go:299] handling current node
	I0918 20:32:21.016360       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:32:21.016398       1 main.go:299] handling current node
	I0918 20:32:31.018897       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:32:31.018932       1 main.go:299] handling current node
	I0918 20:32:41.013094       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:32:41.013131       1 main.go:299] handling current node
	I0918 20:32:51.013547       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0918 20:32:51.013584       1 main.go:299] handling current node
	
	
	==> kube-apiserver [750cae71b7a96d673a41ae1302de983f962f7c8bcd7a8569f010f5014df0c3d5] <==
	W0918 20:28:02.189618       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:03.217980       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:04.135424       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.41.159:443: connect: connection refused
	E0918 20:28:04.135465       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.41.159:443: connect: connection refused" logger="UnhandledError"
	W0918 20:28:04.137187       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:04.174041       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.41.159:443: connect: connection refused
	E0918 20:28:04.174079       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.41.159:443: connect: connection refused" logger="UnhandledError"
	W0918 20:28:04.175645       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:04.254538       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:05.269911       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:06.329478       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:07.402678       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:08.483997       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:09.533043       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:10.586390       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:11.653825       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:12.730702       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.50.85:443: connect: connection refused
	W0918 20:28:24.041625       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.41.159:443: connect: connection refused
	E0918 20:28:24.041669       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.41.159:443: connect: connection refused" logger="UnhandledError"
	W0918 20:29:04.145762       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.41.159:443: connect: connection refused
	E0918 20:29:04.145800       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.41.159:443: connect: connection refused" logger="UnhandledError"
	W0918 20:29:04.182197       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.41.159:443: connect: connection refused
	E0918 20:29:04.182235       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.41.159:443: connect: connection refused" logger="UnhandledError"
	I0918 20:29:49.076006       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0918 20:29:49.112104       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
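
Two webhook failure modes are visible above: the volcano webhooks fail closed (an unreachable webhook rejects the request outright), while gcp-auth-mutate.k8s.io fails open (the request is admitted and only an error is logged). The difference is the failurePolicy field on the webhook registration. A minimal sketch with hypothetical names (the actual addon manifests differ):

    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: example-webhook
    webhooks:
      - name: example.mutate.k8s.io
        failurePolicy: Fail       # fail closed: reject the request if the webhook is unreachable
        # failurePolicy: Ignore   # fail open: admit the request and log the error instead
        clientConfig:
          service:
            name: example-svc
            namespace: example-ns
            path: /mutate
        rules:
          - apiGroups: [""]
            apiVersions: ["v1"]
            operations: ["CREATE"]
            resources: ["pods"]
        admissionReviewVersions: ["v1"]
        sideEffects: None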
	
	
	==> kube-controller-manager [885e6302d981153ff15f5647d19c5b57928591567a82ec58f4384050addcfc6a] <==
	I0918 20:29:04.205706       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0918 20:29:04.213186       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0918 20:29:04.221769       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0918 20:29:04.233579       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0918 20:29:04.917058       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0918 20:29:05.182315       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0918 20:29:05.933461       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0918 20:29:05.949439       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0918 20:29:06.188815       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0918 20:29:07.072786       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0918 20:29:07.111644       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0918 20:29:07.196911       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0918 20:29:07.210235       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0918 20:29:07.218906       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0918 20:29:08.119252       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0918 20:29:08.130858       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0918 20:29:08.137407       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0918 20:29:32.050873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="9.075163ms"
	I0918 20:29:32.051925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="37.81µs"
	I0918 20:29:37.067500       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0918 20:29:37.106637       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0918 20:29:38.016510       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0918 20:29:38.049808       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0918 20:29:47.879889       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-287708"
	I0918 20:29:48.793164       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [763d5d393b0745da2308ef6e486d7bcbf35049d3c30b9077be7dc400f225f59c] <==
	I0918 20:26:50.441426       1 server_linux.go:66] "Using iptables proxy"
	I0918 20:26:50.510800       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0918 20:26:50.510870       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:26:50.544725       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0918 20:26:50.544781       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:26:50.546586       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:26:50.547084       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:26:50.547110       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:26:50.559794       1 config.go:199] "Starting service config controller"
	I0918 20:26:50.559840       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:26:50.559864       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:26:50.559868       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:26:50.560461       1 config.go:328] "Starting node config controller"
	I0918 20:26:50.560471       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:26:50.660487       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:26:50.660518       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:26:50.660582       1 shared_informer.go:320] Caches are synced for node config
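
The server.go:234 warning above is advisory: with nodePortAddresses unset, kube-proxy accepts NodePort connections on every local IP. Restricting it, as the message suggests, is a KubeProxyConfiguration setting; a sketch of the relevant fragment (assuming a kube-proxy recent enough to accept the special value primary, which v1.31.1 is):

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses: ["primary"]   # or explicit CIDRs, e.g. ["192.168.49.0/24"]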
	
	
	==> kube-scheduler [ae541489d762e0287fe4791386f773d12f6bd6d09d6c239859bb70c9ec02eb9c] <==
	W0918 20:26:42.164914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 20:26:42.165979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:26:42.164970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 20:26:42.166107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:26:42.165033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 20:26:42.168117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:26:42.165128       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 20:26:42.168289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:26:42.165189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 20:26:42.168416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:26:42.165256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 20:26:42.168620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0918 20:26:42.165325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 20:26:42.168908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:26:42.165364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 20:26:42.169125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:26:42.165424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 20:26:42.171833       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 20:26:42.165483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 20:26:42.172339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:26:42.165539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 20:26:42.172582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 20:26:42.165727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 20:26:42.172805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0918 20:26:43.353028       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 20:30:49 addons-287708 kubelet[1497]: E0918 20:30:49.929394    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jc8pn_gadget(553062d5-dcee-4b4e-80cb-1e3db7c451c8)\"" pod="gadget/gadget-jc8pn" podUID="553062d5-dcee-4b4e-80cb-1e3db7c451c8"
	Sep 18 20:31:02 addons-287708 kubelet[1497]: I0918 20:31:02.928613    1497 scope.go:117] "RemoveContainer" containerID="c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237"
	Sep 18 20:31:02 addons-287708 kubelet[1497]: E0918 20:31:02.928832    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jc8pn_gadget(553062d5-dcee-4b4e-80cb-1e3db7c451c8)\"" pod="gadget/gadget-jc8pn" podUID="553062d5-dcee-4b4e-80cb-1e3db7c451c8"
	Sep 18 20:31:03 addons-287708 kubelet[1497]: I0918 20:31:03.929622    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wvfsm" secret="" err="secret \"gcp-auth\" not found"
	Sep 18 20:31:16 addons-287708 kubelet[1497]: I0918 20:31:16.928589    1497 scope.go:117] "RemoveContainer" containerID="c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237"
	Sep 18 20:31:16 addons-287708 kubelet[1497]: E0918 20:31:16.929207    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jc8pn_gadget(553062d5-dcee-4b4e-80cb-1e3db7c451c8)\"" pod="gadget/gadget-jc8pn" podUID="553062d5-dcee-4b4e-80cb-1e3db7c451c8"
	Sep 18 20:31:19 addons-287708 kubelet[1497]: I0918 20:31:19.928944    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-6vbt5" secret="" err="secret \"gcp-auth\" not found"
	Sep 18 20:31:20 addons-287708 kubelet[1497]: I0918 20:31:20.928319    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-lv8dc" secret="" err="secret \"gcp-auth\" not found"
	Sep 18 20:31:30 addons-287708 kubelet[1497]: I0918 20:31:30.929013    1497 scope.go:117] "RemoveContainer" containerID="c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237"
	Sep 18 20:31:30 addons-287708 kubelet[1497]: E0918 20:31:30.929181    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jc8pn_gadget(553062d5-dcee-4b4e-80cb-1e3db7c451c8)\"" pod="gadget/gadget-jc8pn" podUID="553062d5-dcee-4b4e-80cb-1e3db7c451c8"
	Sep 18 20:31:41 addons-287708 kubelet[1497]: I0918 20:31:41.928422    1497 scope.go:117] "RemoveContainer" containerID="c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237"
	Sep 18 20:31:41 addons-287708 kubelet[1497]: E0918 20:31:41.929115    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jc8pn_gadget(553062d5-dcee-4b4e-80cb-1e3db7c451c8)\"" pod="gadget/gadget-jc8pn" podUID="553062d5-dcee-4b4e-80cb-1e3db7c451c8"
	Sep 18 20:31:53 addons-287708 kubelet[1497]: I0918 20:31:53.929654    1497 scope.go:117] "RemoveContainer" containerID="c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237"
	Sep 18 20:31:53 addons-287708 kubelet[1497]: E0918 20:31:53.929852    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jc8pn_gadget(553062d5-dcee-4b4e-80cb-1e3db7c451c8)\"" pod="gadget/gadget-jc8pn" podUID="553062d5-dcee-4b4e-80cb-1e3db7c451c8"
	Sep 18 20:32:05 addons-287708 kubelet[1497]: I0918 20:32:05.928586    1497 scope.go:117] "RemoveContainer" containerID="c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237"
	Sep 18 20:32:05 addons-287708 kubelet[1497]: E0918 20:32:05.928779    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jc8pn_gadget(553062d5-dcee-4b4e-80cb-1e3db7c451c8)\"" pod="gadget/gadget-jc8pn" podUID="553062d5-dcee-4b4e-80cb-1e3db7c451c8"
	Sep 18 20:32:07 addons-287708 kubelet[1497]: I0918 20:32:07.929170    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wvfsm" secret="" err="secret \"gcp-auth\" not found"
	Sep 18 20:32:19 addons-287708 kubelet[1497]: I0918 20:32:19.928542    1497 scope.go:117] "RemoveContainer" containerID="c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237"
	Sep 18 20:32:19 addons-287708 kubelet[1497]: E0918 20:32:19.929215    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jc8pn_gadget(553062d5-dcee-4b4e-80cb-1e3db7c451c8)\"" pod="gadget/gadget-jc8pn" podUID="553062d5-dcee-4b4e-80cb-1e3db7c451c8"
	Sep 18 20:32:33 addons-287708 kubelet[1497]: I0918 20:32:33.929873    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-lv8dc" secret="" err="secret \"gcp-auth\" not found"
	Sep 18 20:32:33 addons-287708 kubelet[1497]: I0918 20:32:33.931296    1497 scope.go:117] "RemoveContainer" containerID="c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237"
	Sep 18 20:32:33 addons-287708 kubelet[1497]: E0918 20:32:33.931589    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jc8pn_gadget(553062d5-dcee-4b4e-80cb-1e3db7c451c8)\"" pod="gadget/gadget-jc8pn" podUID="553062d5-dcee-4b4e-80cb-1e3db7c451c8"
	Sep 18 20:32:46 addons-287708 kubelet[1497]: I0918 20:32:46.928704    1497 scope.go:117] "RemoveContainer" containerID="c0086a4e886ba51ea77ff767b411767d5bfaa917df2cc681c107deb7bcb78237"
	Sep 18 20:32:46 addons-287708 kubelet[1497]: E0918 20:32:46.928926    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jc8pn_gadget(553062d5-dcee-4b4e-80cb-1e3db7c451c8)\"" pod="gadget/gadget-jc8pn" podUID="553062d5-dcee-4b4e-80cb-1e3db7c451c8"
	Sep 18 20:32:48 addons-287708 kubelet[1497]: I0918 20:32:48.928838    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-6vbt5" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [a09647c96d44f0a0158e0fec8bff9a5f689dea5ffab7274dee38091b7c2c4976] <==
	I0918 20:26:54.092043       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 20:26:54.144396       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 20:26:54.144438       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 20:26:54.167300       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 20:26:54.167713       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13c85113-521f-45e7-8535-a66e9b5800b1", APIVersion:"v1", ResourceVersion:"517", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-287708_e514bb74-4794-45e1-8352-6b6143354299 became leader
	I0918 20:26:54.167741       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-287708_e514bb74-4794-45e1-8352-6b6143354299!
	I0918 20:26:54.268591       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-287708_e514bb74-4794-45e1-8352-6b6143354299!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-287708 -n addons-287708
helpers_test.go:261: (dbg) Run:  kubectl --context addons-287708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-2c2dv ingress-nginx-admission-patch-9vs9w test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-287708 describe pod ingress-nginx-admission-create-2c2dv ingress-nginx-admission-patch-9vs9w test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-287708 describe pod ingress-nginx-admission-create-2c2dv ingress-nginx-admission-patch-9vs9w test-job-nginx-0: exit status 1 (79.83169ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2c2dv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9vs9w" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-287708 describe pod ingress-nginx-admission-create-2c2dv ingress-nginx-admission-patch-9vs9w test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.86s)
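
Reading the captured logs above: the kube-scheduler "forbidden" list/watch errors at 20:26:42 stop once the informer caches sync at 20:26:43, so they are most likely the usual startup race before RBAC is in place; the recurring signal in the kubelet log is the gadget-jc8pn pod cycling in CrashLoopBackOff. If the addons-287708 profile were still running, a possible way to collect more detail on the crashing container (a sketch, assuming the kubeconfig context created by the test is still present; the output path is arbitrary):

	out/minikube-linux-arm64 logs -p addons-287708 --file=/tmp/addons-287708.log
	kubectl --context addons-287708 -n gadget describe pod gadget-jc8pn
	kubectl --context addons-287708 -n gadget logs gadget-jc8pn --previous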

TestStartStop/group/old-k8s-version/serial/SecondStart (374.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-025914 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0918 21:21:36.414440  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-025914 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m10.110746488s)

-- stdout --
-- stdout --
	* [old-k8s-version-025914] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-025914" primary control-plane node in "old-k8s-version-025914" cluster
	* Pulling base image v0.0.45-1726589491-19662 ...
	* Restarting existing docker container for "old-k8s-version-025914" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-025914 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0918 21:21:26.616592 1096772 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:21:26.616790 1096772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:21:26.616802 1096772 out.go:358] Setting ErrFile to fd 2...
	I0918 21:21:26.616807 1096772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:21:26.617046 1096772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	I0918 21:21:26.617399 1096772 out.go:352] Setting JSON to false
	I0918 21:21:26.618324 1096772 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18235,"bootTime":1726676252,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0918 21:21:26.618394 1096772 start.go:139] virtualization:  
	I0918 21:21:26.621091 1096772 out.go:177] * [old-k8s-version-025914] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0918 21:21:26.623461 1096772 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:21:26.623530 1096772 notify.go:220] Checking for updates...
	I0918 21:21:26.628684 1096772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:21:26.630640 1096772 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 21:21:26.632469 1096772 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	I0918 21:21:26.634102 1096772 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 21:21:26.635874 1096772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:21:26.638509 1096772 config.go:182] Loaded profile config "old-k8s-version-025914": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0918 21:21:26.640902 1096772 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 21:21:26.642659 1096772 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:21:26.679103 1096772 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 21:21:26.679275 1096772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 21:21:26.754008 1096772 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2024-09-18 21:21:26.744340016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 21:21:26.754127 1096772 docker.go:318] overlay module found
	I0918 21:21:26.755993 1096772 out.go:177] * Using the docker driver based on existing profile
	I0918 21:21:26.758265 1096772 start.go:297] selected driver: docker
	I0918 21:21:26.758283 1096772 start.go:901] validating driver "docker" against &{Name:old-k8s-version-025914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-025914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:21:26.758400 1096772 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:21:26.758990 1096772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 21:21:26.878072 1096772 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2024-09-18 21:21:26.866324275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 21:21:26.878467 1096772 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:21:26.878486 1096772 cni.go:84] Creating CNI manager for ""
	I0918 21:21:26.878525 1096772 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0918 21:21:26.878559 1096772 start.go:340] cluster config:
	{Name:old-k8s-version-025914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-025914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:21:26.880789 1096772 out.go:177] * Starting "old-k8s-version-025914" primary control-plane node in "old-k8s-version-025914" cluster
	I0918 21:21:26.882845 1096772 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0918 21:21:26.885750 1096772 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0918 21:21:26.887197 1096772 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0918 21:21:26.887253 1096772 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-874114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0918 21:21:26.887263 1096772 cache.go:56] Caching tarball of preloaded images
	I0918 21:21:26.887341 1096772 preload.go:172] Found /home/jenkins/minikube-integration/19667-874114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 21:21:26.887351 1096772 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0918 21:21:26.887467 1096772 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/config.json ...
	I0918 21:21:26.887679 1096772 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	W0918 21:21:26.908671 1096772 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0918 21:21:26.908691 1096772 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 21:21:26.908769 1096772 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 21:21:26.908786 1096772 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0918 21:21:26.908790 1096772 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0918 21:21:26.908797 1096772 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0918 21:21:26.908803 1096772 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0918 21:21:27.040052 1096772 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0918 21:21:27.040177 1096772 cache.go:194] Successfully downloaded all kic artifacts
	I0918 21:21:27.040220 1096772 start.go:360] acquireMachinesLock for old-k8s-version-025914: {Name:mkfef6ada30889277249f8dbae761d3584c169dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:21:27.040289 1096772 start.go:364] duration metric: took 43.142µs to acquireMachinesLock for "old-k8s-version-025914"
	I0918 21:21:27.040310 1096772 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:21:27.040315 1096772 fix.go:54] fixHost starting: 
	I0918 21:21:27.040614 1096772 cli_runner.go:164] Run: docker container inspect old-k8s-version-025914 --format={{.State.Status}}
	I0918 21:21:27.073181 1096772 fix.go:112] recreateIfNeeded on old-k8s-version-025914: state=Stopped err=<nil>
	W0918 21:21:27.073230 1096772 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:21:27.075240 1096772 out.go:177] * Restarting existing docker container for "old-k8s-version-025914" ...
	I0918 21:21:27.076838 1096772 cli_runner.go:164] Run: docker start old-k8s-version-025914
	I0918 21:21:27.433030 1096772 cli_runner.go:164] Run: docker container inspect old-k8s-version-025914 --format={{.State.Status}}
	I0918 21:21:27.482808 1096772 kic.go:430] container "old-k8s-version-025914" state is running.
	I0918 21:21:27.483193 1096772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-025914
	I0918 21:21:27.525311 1096772 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/config.json ...
	I0918 21:21:27.525908 1096772 machine.go:93] provisionDockerMachine start ...
	I0918 21:21:27.525975 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:27.559154 1096772 main.go:141] libmachine: Using SSH client type: native
	I0918 21:21:27.559413 1096772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34175 <nil> <nil>}
	I0918 21:21:27.559423 1096772 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:21:27.560510 1096772 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0918 21:21:30.716175 1096772 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-025914
	
	I0918 21:21:30.716217 1096772 ubuntu.go:169] provisioning hostname "old-k8s-version-025914"
	I0918 21:21:30.716342 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:30.738182 1096772 main.go:141] libmachine: Using SSH client type: native
	I0918 21:21:30.738441 1096772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34175 <nil> <nil>}
	I0918 21:21:30.738461 1096772 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-025914 && echo "old-k8s-version-025914" | sudo tee /etc/hostname
	I0918 21:21:30.897586 1096772 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-025914
	
	I0918 21:21:30.897672 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:30.918810 1096772 main.go:141] libmachine: Using SSH client type: native
	I0918 21:21:30.919074 1096772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34175 <nil> <nil>}
	I0918 21:21:30.919100 1096772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-025914' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-025914/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-025914' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:21:31.065618 1096772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:21:31.065647 1096772 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19667-874114/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-874114/.minikube}
	I0918 21:21:31.065691 1096772 ubuntu.go:177] setting up certificates
	I0918 21:21:31.065707 1096772 provision.go:84] configureAuth start
	I0918 21:21:31.065771 1096772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-025914
	I0918 21:21:31.095639 1096772 provision.go:143] copyHostCerts
	I0918 21:21:31.095731 1096772 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-874114/.minikube/cert.pem, removing ...
	I0918 21:21:31.095751 1096772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-874114/.minikube/cert.pem
	I0918 21:21:31.095841 1096772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-874114/.minikube/cert.pem (1123 bytes)
	I0918 21:21:31.095994 1096772 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-874114/.minikube/key.pem, removing ...
	I0918 21:21:31.096009 1096772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-874114/.minikube/key.pem
	I0918 21:21:31.096045 1096772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-874114/.minikube/key.pem (1679 bytes)
	I0918 21:21:31.096211 1096772 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-874114/.minikube/ca.pem, removing ...
	I0918 21:21:31.096224 1096772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-874114/.minikube/ca.pem
	I0918 21:21:31.096260 1096772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-874114/.minikube/ca.pem (1082 bytes)
	I0918 21:21:31.096359 1096772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-874114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-025914 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-025914]
	I0918 21:21:31.489344 1096772 provision.go:177] copyRemoteCerts
	I0918 21:21:31.489421 1096772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:21:31.489468 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:31.507123 1096772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34175 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/old-k8s-version-025914/id_rsa Username:docker}
	I0918 21:21:31.609257 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:21:31.635439 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0918 21:21:31.661301 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:21:31.687255 1096772 provision.go:87] duration metric: took 621.529719ms to configureAuth
	I0918 21:21:31.687283 1096772 ubuntu.go:193] setting minikube options for container-runtime
	I0918 21:21:31.687483 1096772 config.go:182] Loaded profile config "old-k8s-version-025914": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0918 21:21:31.687497 1096772 machine.go:96] duration metric: took 4.161575418s to provisionDockerMachine
	I0918 21:21:31.687505 1096772 start.go:293] postStartSetup for "old-k8s-version-025914" (driver="docker")
	I0918 21:21:31.687515 1096772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:21:31.687570 1096772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:21:31.687618 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:31.717515 1096772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34175 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/old-k8s-version-025914/id_rsa Username:docker}
	I0918 21:21:31.826143 1096772 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:21:31.834749 1096772 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 21:21:31.834789 1096772 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 21:21:31.834801 1096772 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 21:21:31.834809 1096772 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0918 21:21:31.834823 1096772 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-874114/.minikube/addons for local assets ...
	I0918 21:21:31.834887 1096772 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-874114/.minikube/files for local assets ...
	I0918 21:21:31.834971 1096772 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-874114/.minikube/files/etc/ssl/certs/8794972.pem -> 8794972.pem in /etc/ssl/certs
	I0918 21:21:31.835082 1096772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:21:31.844361 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/files/etc/ssl/certs/8794972.pem --> /etc/ssl/certs/8794972.pem (1708 bytes)
	I0918 21:21:31.870526 1096772 start.go:296] duration metric: took 183.0042ms for postStartSetup
	I0918 21:21:31.870638 1096772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 21:21:31.870691 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:31.887910 1096772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34175 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/old-k8s-version-025914/id_rsa Username:docker}
	I0918 21:21:31.985507 1096772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 21:21:31.990131 1096772 fix.go:56] duration metric: took 4.949806358s for fixHost
	I0918 21:21:31.990156 1096772 start.go:83] releasing machines lock for "old-k8s-version-025914", held for 4.949858502s
	I0918 21:21:31.990223 1096772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-025914
	I0918 21:21:32.016278 1096772 ssh_runner.go:195] Run: cat /version.json
	I0918 21:21:32.016335 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:32.016593 1096772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:21:32.016677 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:32.048245 1096772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34175 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/old-k8s-version-025914/id_rsa Username:docker}
	I0918 21:21:32.061535 1096772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34175 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/old-k8s-version-025914/id_rsa Username:docker}
	I0918 21:21:32.152209 1096772 ssh_runner.go:195] Run: systemctl --version
	I0918 21:21:32.285500 1096772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 21:21:32.289993 1096772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0918 21:21:32.308113 1096772 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0918 21:21:32.308238 1096772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:21:32.317855 1096772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0918 21:21:32.317931 1096772 start.go:495] detecting cgroup driver to use...
	I0918 21:21:32.317978 1096772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 21:21:32.318062 1096772 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 21:21:32.333335 1096772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 21:21:32.346454 1096772 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:21:32.346580 1096772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:21:32.360896 1096772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:21:32.374024 1096772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:21:32.476703 1096772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:21:32.593767 1096772 docker.go:233] disabling docker service ...
	I0918 21:21:32.593862 1096772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:21:32.611603 1096772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:21:32.626962 1096772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:21:32.740134 1096772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:21:32.879026 1096772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:21:32.894287 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:21:32.914124 1096772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0918 21:21:32.925986 1096772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 21:21:32.937526 1096772 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 21:21:32.937658 1096772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 21:21:32.949623 1096772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 21:21:32.961339 1096772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 21:21:32.971758 1096772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 21:21:32.982604 1096772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:21:32.992095 1096772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 21:21:33.003508 1096772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:21:33.015315 1096772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:21:33.027035 1096772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:21:33.135113 1096772 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 21:21:33.347961 1096772 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0918 21:21:33.348101 1096772 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0918 21:21:33.356484 1096772 start.go:563] Will wait 60s for crictl version
	I0918 21:21:33.356600 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:21:33.360554 1096772 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:21:33.410722 1096772 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0918 21:21:33.410871 1096772 ssh_runner.go:195] Run: containerd --version
	I0918 21:21:33.432116 1096772 ssh_runner.go:195] Run: containerd --version
	I0918 21:21:33.457169 1096772 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0918 21:21:33.459463 1096772 cli_runner.go:164] Run: docker network inspect old-k8s-version-025914 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 21:21:33.475180 1096772 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0918 21:21:33.479038 1096772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:21:33.490301 1096772 kubeadm.go:883] updating cluster {Name:old-k8s-version-025914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-025914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:21:33.490421 1096772 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0918 21:21:33.490493 1096772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:21:33.595969 1096772 containerd.go:627] all images are preloaded for containerd runtime.
	I0918 21:21:33.595990 1096772 containerd.go:534] Images already preloaded, skipping extraction
	I0918 21:21:33.596051 1096772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:21:33.658116 1096772 containerd.go:627] all images are preloaded for containerd runtime.
	I0918 21:21:33.658190 1096772 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:21:33.658218 1096772 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0918 21:21:33.658383 1096772 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-025914 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-025914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:21:33.658488 1096772 ssh_runner.go:195] Run: sudo crictl info
	I0918 21:21:33.719569 1096772 cni.go:84] Creating CNI manager for ""
	I0918 21:21:33.719603 1096772 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0918 21:21:33.719618 1096772 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:21:33.719640 1096772 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-025914 NodeName:old-k8s-version-025914 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 21:21:33.719797 1096772 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-025914"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
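	(Editor's note: a minimal sketch, not minikube code, of how the generated KubeletConfiguration above could be sanity-checked before it is shipped to the node. The kubeletConfig struct is a hypothetical subset defined only for this example; field tags mirror the YAML keys in the dump.)

	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3"
	)

	// kubeletConfig models just the fields we want to verify.
	type kubeletConfig struct {
		CgroupDriver  string            `yaml:"cgroupDriver"`
		FailSwapOn    bool              `yaml:"failSwapOn"`
		EvictionHard  map[string]string `yaml:"evictionHard"`
		StaticPodPath string            `yaml:"staticPodPath"`
	}

	const doc = `
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	evictionHard:
	  nodefs.available: "0%"
	`

	func main() {
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
			log.Fatalf("parse kubelet config: %v", err)
		}
		// The log above shows the driver must match the node's cgroupfs setup.
		if cfg.CgroupDriver != "cgroupfs" {
			log.Fatalf("unexpected cgroup driver %q", cfg.CgroupDriver)
		}
		fmt.Printf("driver=%s staticPodPath=%s evictionHard=%v\n",
			cfg.CgroupDriver, cfg.StaticPodPath, cfg.EvictionHard)
	}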
	
	I0918 21:21:33.719883 1096772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0918 21:21:33.731886 1096772 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:21:33.732056 1096772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:21:33.743654 1096772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0918 21:21:33.767301 1096772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:21:33.790987 1096772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0918 21:21:33.818845 1096772 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0918 21:21:33.823353 1096772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
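	(Editor's note: the shell one-liner above filters any stale control-plane.minikube.internal entry out of /etc/hosts, appends the current mapping, and replaces the file via a temp copy. A rough Go equivalent, offered only as an illustration of the technique; for brevity it writes the file back in place rather than staging through /tmp as the shell version does.)

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		// Keep every line except a previous mapping for the control-plane name.
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.85.2\t"+host)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
	}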
	I0918 21:21:33.835720 1096772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:21:33.961135 1096772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:21:33.982199 1096772 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914 for IP: 192.168.85.2
	I0918 21:21:33.982287 1096772 certs.go:194] generating shared ca certs ...
	I0918 21:21:33.982324 1096772 certs.go:226] acquiring lock for ca certs: {Name:mk4a2e50bce1acd2df63eb42e5a33734237a5b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:21:33.982565 1096772 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-874114/.minikube/ca.key
	I0918 21:21:33.982661 1096772 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.key
	I0918 21:21:33.982710 1096772 certs.go:256] generating profile certs ...
	I0918 21:21:33.982897 1096772 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.key
	I0918 21:21:33.983053 1096772 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/apiserver.key.f927fb46
	I0918 21:21:33.983151 1096772 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/proxy-client.key
	I0918 21:21:33.983371 1096772 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/879497.pem (1338 bytes)
	W0918 21:21:33.983451 1096772 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-874114/.minikube/certs/879497_empty.pem, impossibly tiny 0 bytes
	I0918 21:21:33.983491 1096772 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 21:21:33.983582 1096772 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:21:33.983657 1096772 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:21:33.983735 1096772 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/key.pem (1679 bytes)
	I0918 21:21:33.983836 1096772 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/files/etc/ssl/certs/8794972.pem (1708 bytes)
	I0918 21:21:33.985126 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:21:34.019807 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 21:21:34.049587 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:21:34.098123 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:21:34.121941 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 21:21:34.169364 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 21:21:34.228131 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:21:34.256614 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:21:34.284756 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/certs/879497.pem --> /usr/share/ca-certificates/879497.pem (1338 bytes)
	I0918 21:21:34.311081 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/files/etc/ssl/certs/8794972.pem --> /usr/share/ca-certificates/8794972.pem (1708 bytes)
	I0918 21:21:34.336496 1096772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:21:34.362894 1096772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:21:34.383060 1096772 ssh_runner.go:195] Run: openssl version
	I0918 21:21:34.392501 1096772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8794972.pem && ln -fs /usr/share/ca-certificates/8794972.pem /etc/ssl/certs/8794972.pem"
	I0918 21:21:34.402897 1096772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8794972.pem
	I0918 21:21:34.406799 1096772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 20:36 /usr/share/ca-certificates/8794972.pem
	I0918 21:21:34.406915 1096772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8794972.pem
	I0918 21:21:34.414043 1096772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8794972.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:21:34.423708 1096772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:21:34.433649 1096772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:21:34.437572 1096772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 20:26 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:21:34.437682 1096772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:21:34.444807 1096772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:21:34.455002 1096772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/879497.pem && ln -fs /usr/share/ca-certificates/879497.pem /etc/ssl/certs/879497.pem"
	I0918 21:21:34.465253 1096772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/879497.pem
	I0918 21:21:34.469044 1096772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 20:36 /usr/share/ca-certificates/879497.pem
	I0918 21:21:34.469153 1096772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/879497.pem
	I0918 21:21:34.476059 1096772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/879497.pem /etc/ssl/certs/51391683.0"
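	(Editor's note: the three install steps above follow the OpenSSL c_rehash convention: each CA is linked into /etc/ssl/certs under its subject-hash name, e.g. minikubeCA.pem -> b5213941.0, so TLS libraries can find it by hash. A minimal sketch of that step, shelling out to openssl exactly as the log does; not the minikube implementation.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA links a PEM cert into /etc/ssl/certs under its subject hash.
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", pem, err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		// Equivalent of ln -fs: drop a stale link, then point it at the cert.
		_ = os.Remove(link)
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}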
	I0918 21:21:34.486152 1096772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:21:34.490940 1096772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:21:34.497897 1096772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:21:34.505097 1096772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:21:34.512360 1096772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:21:34.519473 1096772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:21:34.526662 1096772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
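	(Editor's note: `openssl x509 -checkend 86400` in the runs above asks whether a cert expires within the next 24 hours. The same check can be done in pure Go with crypto/x509; this sketch assumes a PEM file path and is not minikube's code.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the cert at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}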
	I0918 21:21:34.533750 1096772 kubeadm.go:392] StartCluster: {Name:old-k8s-version-025914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-025914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:21:34.533854 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0918 21:21:34.533979 1096772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:21:34.601751 1096772 cri.go:89] found id: "76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77"
	I0918 21:21:34.601783 1096772 cri.go:89] found id: "db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c"
	I0918 21:21:34.601788 1096772 cri.go:89] found id: "c5731fd6fdc298491104d56a10261e4f0be0a0156419837cbd8b5e73007d7ff8"
	I0918 21:21:34.601819 1096772 cri.go:89] found id: "724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890"
	I0918 21:21:34.601829 1096772 cri.go:89] found id: "654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3"
	I0918 21:21:34.601834 1096772 cri.go:89] found id: "0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca"
	I0918 21:21:34.601838 1096772 cri.go:89] found id: "e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f"
	I0918 21:21:34.601841 1096772 cri.go:89] found id: "85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155"
	I0918 21:21:34.601845 1096772 cri.go:89] found id: ""
	I0918 21:21:34.601913 1096772 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0918 21:21:34.618261 1096772 cri.go:116] JSON = null
	W0918 21:21:34.618328 1096772 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0918 21:21:34.618428 1096772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:21:34.629728 1096772 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:21:34.629746 1096772 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:21:34.629825 1096772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:21:34.642301 1096772 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:21:34.642793 1096772 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-025914" does not appear in /home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 21:21:34.642948 1096772 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-874114/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-025914" cluster setting kubeconfig missing "old-k8s-version-025914" context setting]
	I0918 21:21:34.643309 1096772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/kubeconfig: {Name:mke33cc40bb5f82b15bbe41884ab27179b9ca37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:21:34.644650 1096772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:21:34.653952 1096772 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0918 21:21:34.653983 1096772 kubeadm.go:597] duration metric: took 24.23134ms to restartPrimaryControlPlane
	I0918 21:21:34.653992 1096772 kubeadm.go:394] duration metric: took 120.25122ms to StartCluster
	I0918 21:21:34.654006 1096772 settings.go:142] acquiring lock: {Name:mk57bc44f9fec4b4923bac0bde72e24bb39c4097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:21:34.654082 1096772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 21:21:34.654706 1096772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/kubeconfig: {Name:mke33cc40bb5f82b15bbe41884ab27179b9ca37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:21:34.654954 1096772 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0918 21:21:34.655286 1096772 config.go:182] Loaded profile config "old-k8s-version-025914": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0918 21:21:34.655353 1096772 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:21:34.655457 1096772 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-025914"
	I0918 21:21:34.655476 1096772 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-025914"
	W0918 21:21:34.655488 1096772 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:21:34.655510 1096772 host.go:66] Checking if "old-k8s-version-025914" exists ...
	I0918 21:21:34.655987 1096772 cli_runner.go:164] Run: docker container inspect old-k8s-version-025914 --format={{.State.Status}}
	I0918 21:21:34.656190 1096772 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-025914"
	I0918 21:21:34.656218 1096772 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-025914"
	I0918 21:21:34.656498 1096772 cli_runner.go:164] Run: docker container inspect old-k8s-version-025914 --format={{.State.Status}}
	I0918 21:21:34.663468 1096772 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-025914"
	I0918 21:21:34.663516 1096772 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-025914"
	W0918 21:21:34.663548 1096772 addons.go:243] addon metrics-server should already be in state true
	I0918 21:21:34.663592 1096772 host.go:66] Checking if "old-k8s-version-025914" exists ...
	I0918 21:21:34.664106 1096772 cli_runner.go:164] Run: docker container inspect old-k8s-version-025914 --format={{.State.Status}}
	I0918 21:21:34.664399 1096772 addons.go:69] Setting dashboard=true in profile "old-k8s-version-025914"
	I0918 21:21:34.664423 1096772 addons.go:234] Setting addon dashboard=true in "old-k8s-version-025914"
	W0918 21:21:34.664430 1096772 addons.go:243] addon dashboard should already be in state true
	I0918 21:21:34.664460 1096772 host.go:66] Checking if "old-k8s-version-025914" exists ...
	I0918 21:21:34.664881 1096772 cli_runner.go:164] Run: docker container inspect old-k8s-version-025914 --format={{.State.Status}}
	I0918 21:21:34.676126 1096772 out.go:177] * Verifying Kubernetes components...
	I0918 21:21:34.678423 1096772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:21:34.741262 1096772 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:21:34.744451 1096772 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:21:34.744473 1096772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:21:34.744538 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:34.749301 1096772 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-025914"
	W0918 21:21:34.749318 1096772 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:21:34.749344 1096772 host.go:66] Checking if "old-k8s-version-025914" exists ...
	I0918 21:21:34.749772 1096772 cli_runner.go:164] Run: docker container inspect old-k8s-version-025914 --format={{.State.Status}}
	I0918 21:21:34.760138 1096772 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0918 21:21:34.762206 1096772 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:21:34.768130 1096772 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0918 21:21:34.768227 1096772 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:21:34.768239 1096772 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:21:34.768321 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:34.770339 1096772 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0918 21:21:34.770365 1096772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0918 21:21:34.770432 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:34.794708 1096772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34175 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/old-k8s-version-025914/id_rsa Username:docker}
	I0918 21:21:34.835640 1096772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34175 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/old-k8s-version-025914/id_rsa Username:docker}
	I0918 21:21:34.836178 1096772 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:21:34.836193 1096772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:21:34.836249 1096772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-025914
	I0918 21:21:34.864269 1096772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34175 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/old-k8s-version-025914/id_rsa Username:docker}
	I0918 21:21:34.868249 1096772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34175 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/old-k8s-version-025914/id_rsa Username:docker}
	I0918 21:21:34.931807 1096772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:21:34.997913 1096772 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-025914" to be "Ready" ...
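	(Editor's note: the "waiting up to 6m0s for node ... to be Ready" step amounts to polling the Node object and inspecting its Ready condition, tolerating the "connection refused" errors seen later while the apiserver restarts. A minimal client-go sketch of that loop, assuming the kubeconfig path from the log; not the minikube implementation.)

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-025914", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // transient dial errors are expected here
		}
		log.Fatal("node never became Ready within 6m")
	}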
	I0918 21:21:35.091630 1096772 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0918 21:21:35.091681 1096772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0918 21:21:35.095439 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:21:35.106791 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:21:35.151570 1096772 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0918 21:21:35.151592 1096772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0918 21:21:35.162827 1096772 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:21:35.162857 1096772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:21:35.237612 1096772 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0918 21:21:35.237700 1096772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0918 21:21:35.248949 1096772 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:21:35.249029 1096772 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:21:35.305421 1096772 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0918 21:21:35.305496 1096772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0918 21:21:35.357829 1096772 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0918 21:21:35.357922 1096772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0918 21:21:35.359006 1096772 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:21:35.359145 1096772 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:21:35.401405 1096772 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0918 21:21:35.401484 1096772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0918 21:21:35.407128 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0918 21:21:35.441639 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.441723 1096772 retry.go:31] will retry after 225.667415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.450632 1096772 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0918 21:21:35.450692 1096772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0918 21:21:35.511293 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.511402 1096772 retry.go:31] will retry after 313.94897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 21:21:35.539466 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.539546 1096772 retry.go:31] will retry after 207.525514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.544990 1096772 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0918 21:21:35.545061 1096772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0918 21:21:35.568234 1096772 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0918 21:21:35.568259 1096772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0918 21:21:35.587911 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0918 21:21:35.668111 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0918 21:21:35.681456 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.681488 1096772 retry.go:31] will retry after 215.143891ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.747818 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0918 21:21:35.774775 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.774811 1096772 retry.go:31] will retry after 263.922737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.826133 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0918 21:21:35.864393 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.864426 1096772 retry.go:31] will retry after 285.984687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.897758 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0918 21:21:35.963333 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:35.963371 1096772 retry.go:31] will retry after 376.038458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:36.039293 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0918 21:21:36.045550 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:36.045587 1096772 retry.go:31] will retry after 282.216133ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 21:21:36.139415 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:36.139449 1096772 retry.go:31] will retry after 843.01352ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:36.150746 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0918 21:21:36.235399 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:36.235435 1096772 retry.go:31] will retry after 706.203003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:36.328687 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0918 21:21:36.340103 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0918 21:21:36.475318 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:36.475399 1096772 retry.go:31] will retry after 309.835038ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 21:21:36.524799 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:36.524834 1096772 retry.go:31] will retry after 445.697234ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:36.785613 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0918 21:21:36.892726 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:36.892757 1096772 retry.go:31] will retry after 462.102189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:36.942061 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:21:36.971591 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:21:36.983060 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:21:36.998784 1096772 node_ready.go:53] error getting node "old-k8s-version-025914": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-025914": dial tcp 192.168.85.2:8443: connect: connection refused
	W0918 21:21:37.163456 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:37.163569 1096772 retry.go:31] will retry after 871.186586ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 21:21:37.189771 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:37.189885 1096772 retry.go:31] will retry after 1.143618983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 21:21:37.227353 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:37.227388 1096772 retry.go:31] will retry after 1.147187825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:37.355448 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0918 21:21:37.427243 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:37.427279 1096772 retry.go:31] will retry after 1.614218497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:38.035752 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0918 21:21:38.168835 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:38.168916 1096772 retry.go:31] will retry after 972.576934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:38.334280 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:21:38.375565 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0918 21:21:38.519551 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:38.519633 1096772 retry.go:31] will retry after 1.19467654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 21:21:38.543871 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:38.543954 1096772 retry.go:31] will retry after 1.560359462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:39.042093 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0918 21:21:39.142655 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0918 21:21:39.188704 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:39.188788 1096772 retry.go:31] will retry after 2.558688784s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 21:21:39.330718 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:39.330800 1096772 retry.go:31] will retry after 1.067909634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:39.499330 1096772 node_ready.go:53] error getting node "old-k8s-version-025914": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-025914": dial tcp 192.168.85.2:8443: connect: connection refused
	I0918 21:21:39.714532 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0918 21:21:39.824524 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:39.824626 1096772 retry.go:31] will retry after 2.380080425s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:40.104744 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0918 21:21:40.257081 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:40.257168 1096772 retry.go:31] will retry after 2.809094654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:40.399553 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0918 21:21:40.530170 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:40.530253 1096772 retry.go:31] will retry after 1.956526112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:41.748697 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0918 21:21:41.873927 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:41.874011 1096772 retry.go:31] will retry after 3.339652219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:41.999543 1096772 node_ready.go:53] error getting node "old-k8s-version-025914": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-025914": dial tcp 192.168.85.2:8443: connect: connection refused
	I0918 21:21:42.204986 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0918 21:21:42.311672 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:42.311760 1096772 retry.go:31] will retry after 1.799383591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:42.487848 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0918 21:21:42.655906 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:42.655987 1096772 retry.go:31] will retry after 5.643851932s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:43.066847 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0918 21:21:43.213477 1096772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:43.213514 1096772 retry.go:31] will retry after 2.89962471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 21:21:44.111758 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:21:45.214271 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0918 21:21:46.113823 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:21:48.300370 1096772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:21:53.925901 1096772 node_ready.go:49] node "old-k8s-version-025914" has status "Ready":"True"
	I0918 21:21:53.925932 1096772 node_ready.go:38] duration metric: took 18.927930509s for node "old-k8s-version-025914" to be "Ready" ...
	I0918 21:21:53.925943 1096772 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:21:54.380681 1096772 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-8jxxt" in "kube-system" namespace to be "Ready" ...
	I0918 21:21:54.542215 1096772 pod_ready.go:93] pod "coredns-74ff55c5b-8jxxt" in "kube-system" namespace has status "Ready":"True"
	I0918 21:21:54.542284 1096772 pod_ready.go:82] duration metric: took 161.537077ms for pod "coredns-74ff55c5b-8jxxt" in "kube-system" namespace to be "Ready" ...
	I0918 21:21:54.542298 1096772 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-025914" in "kube-system" namespace to be "Ready" ...
	I0918 21:21:54.569221 1096772 pod_ready.go:93] pod "etcd-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"True"
	I0918 21:21:54.569249 1096772 pod_ready.go:82] duration metric: took 26.942411ms for pod "etcd-old-k8s-version-025914" in "kube-system" namespace to be "Ready" ...
	I0918 21:21:54.569265 1096772 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-025914" in "kube-system" namespace to be "Ready" ...
	I0918 21:21:55.220573 1096772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.108771312s)
	I0918 21:21:55.593850 1096772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.379454552s)
	I0918 21:21:55.594080 1096772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.480167s)
	I0918 21:21:55.594203 1096772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.293801136s)
	I0918 21:21:55.594236 1096772 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-025914"
	I0918 21:21:55.597465 1096772 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-025914 addons enable metrics-server
	
	I0918 21:21:55.600465 1096772 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0918 21:21:55.603053 1096772 addons.go:510] duration metric: took 20.947693523s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
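
The apply failures in the block above all share one stderr line: the apiserver on localhost:8443 is still coming up after the restart, so each addon manifest is re-applied on a growing, jittered delay (the "will retry after ..." entries from retry.go) until the server answers and the four applies complete between 21:21:44 and 21:21:55. A minimal Go sketch of that retry-with-backoff pattern, using a hypothetical applyWithRetry helper rather than minikube's actual retry.go:

    // a minimal sketch of retrying an apply with growing, jittered delays;
    // applyWithRetry is hypothetical, not minikube's retry.go implementation.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(manifest string, attempts int) error {
    	delay := time.Second
    	var err error
    	for i := 0; i < attempts; i++ {
    		// mirrors the logged command: kubectl apply --force -f <manifest>
    		err = exec.Command("sudo", "kubectl", "apply", "--force", "-f", manifest).Run()
    		if err == nil {
    			return nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jittered
    		fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    		delay *= 2 // grow the base delay each attempt
    	}
    	return fmt.Errorf("apply %s failed after %d attempts: %w", manifest, attempts, err)
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
    		fmt.Println(err)
    	}
    }

Growing the delay keeps retries cheap while the apiserver boots, and the jitter avoids the four concurrent addon applies all hitting the server at the same instant, which matches the staggered retry intervals (1.19s, 1.56s, 2.56s, ...) seen above.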
	I0918 21:21:56.575079 1096772 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:21:58.576136 1096772 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:00.576993 1096772 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:02.077717 1096772 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"True"
	I0918 21:22:02.077746 1096772 pod_ready.go:82] duration metric: took 7.508463573s for pod "kube-apiserver-old-k8s-version-025914" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:02.077760 1096772 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:04.084933 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:06.085165 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:08.091526 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:10.583390 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:12.584530 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:14.585178 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:16.588303 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:18.594464 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:21.085468 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:23.085988 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:25.583967 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:27.584518 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:29.584757 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:32.087354 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:34.583990 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:36.584482 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:38.585452 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:41.087086 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:43.583288 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:45.583879 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:47.584005 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:49.584297 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:51.585024 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:53.585054 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:56.085819 1096772 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"True"
	I0918 21:22:56.085846 1096772 pod_ready.go:82] duration metric: took 54.008077221s for pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:56.085859 1096772 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gtz6t" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:56.095245 1096772 pod_ready.go:93] pod "kube-proxy-gtz6t" in "kube-system" namespace has status "Ready":"True"
	I0918 21:22:56.095271 1096772 pod_ready.go:82] duration metric: took 9.404099ms for pod "kube-proxy-gtz6t" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:56.095288 1096772 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:58.102579 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:00.166311 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:02.601254 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:05.105072 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:07.601512 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:09.602768 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:12.103103 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:13.101785 1096772 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"True"
	I0918 21:23:13.101813 1096772 pod_ready.go:82] duration metric: took 17.006517353s for pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace to be "Ready" ...
	I0918 21:23:13.101826 1096772 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace to be "Ready" ...
	I0918 21:23:15.110518 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:17.607133 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:19.608703 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:22.110047 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:24.607331 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:26.608568 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:29.108392 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:31.109872 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:33.608636 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:36.107501 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:38.108616 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:40.110093 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:42.607484 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:44.608168 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:46.608298 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:49.107716 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:51.113501 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:53.608404 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:56.108272 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:58.607923 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:00.611020 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:03.108929 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:05.109771 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:07.608668 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:09.608971 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:12.108683 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:14.638422 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:17.108068 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:19.108679 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:21.607535 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:23.608133 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:26.107445 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:28.109157 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:30.110258 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:32.608426 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:35.109127 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:37.607560 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:39.609783 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:42.111317 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:44.608598 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:47.108627 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:49.608213 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:52.109051 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:54.607487 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:56.608593 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:59.108181 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:01.109061 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:03.608636 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:06.107683 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:08.110547 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:10.609335 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:13.108218 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:15.130471 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:17.607342 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:19.608453 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:22.108055 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:24.109937 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:26.607444 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:28.607534 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:30.608291 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:33.108540 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:35.109640 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:37.607933 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:39.608384 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:42.125244 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:44.607391 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:46.607948 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:49.107687 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:51.110223 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:53.607778 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:56.107955 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:58.607210 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:00.608463 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:03.108557 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:05.108760 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:07.109757 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:09.608447 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:12.108605 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:14.608040 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:16.608132 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:18.608381 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:21.108125 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:23.109333 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:25.608568 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:28.108809 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:30.112155 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:32.609879 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:34.614885 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:37.107667 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:39.108977 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:41.607672 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:43.608503 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:45.608546 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:48.108036 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:50.110278 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:52.608465 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:55.110186 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:57.607401 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:59.608270 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:01.608361 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:04.108401 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:06.158568 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:08.610247 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:11.108978 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:13.110231 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:13.110264 1096772 pod_ready.go:82] duration metric: took 4m0.008430385s for pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace to be "Ready" ...
	E0918 21:27:13.110276 1096772 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0918 21:27:13.110284 1096772 pod_ready.go:39] duration metric: took 5m19.184329644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
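
By this point the node and every control-plane pod had reported Ready, but metrics-server-9975d5f86-vgp87 stayed NotReady for the full 4m0s budget, so the extra wait ends with "context deadline exceeded". The polling that produced the long run of pod_ready.go:103 lines boils down to re-fetching the pod and checking its Ready condition until a deadline; a minimal client-go sketch of that loop, with a hypothetical waitPodReady helper rather than minikube's actual pod_ready.go:

    // a minimal sketch of polling a pod's Ready condition with a deadline;
    // waitPodReady is hypothetical, not minikube's pod_ready.go.
    package readiness

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient apiserver errors: keep polling
    			}
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

Transient GET errors are swallowed so the loop keeps polling, matching the node_ready.go:53 entries earlier in the log where a refused connection is logged and retried rather than treated as fatal.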
	I0918 21:27:13.110298 1096772 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:27:13.110331 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:27:13.110395 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:27:13.148685 1096772 cri.go:89] found id: "e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972"
	I0918 21:27:13.148711 1096772 cri.go:89] found id: "e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f"
	I0918 21:27:13.148728 1096772 cri.go:89] found id: ""
	I0918 21:27:13.148735 1096772 logs.go:276] 2 containers: [e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972 e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f]
	I0918 21:27:13.148793 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.152558 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.156017 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0918 21:27:13.156142 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:27:13.194545 1096772 cri.go:89] found id: "bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31"
	I0918 21:27:13.194568 1096772 cri.go:89] found id: "85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155"
	I0918 21:27:13.194573 1096772 cri.go:89] found id: ""
	I0918 21:27:13.194581 1096772 logs.go:276] 2 containers: [bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31 85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155]
	I0918 21:27:13.194640 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.198199 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.202224 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0918 21:27:13.202351 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:27:13.251899 1096772 cri.go:89] found id: "3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e"
	I0918 21:27:13.251920 1096772 cri.go:89] found id: "76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77"
	I0918 21:27:13.251925 1096772 cri.go:89] found id: ""
	I0918 21:27:13.251932 1096772 logs.go:276] 2 containers: [3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e 76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77]
	I0918 21:27:13.251994 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.255890 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.259521 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:27:13.259597 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:27:13.308887 1096772 cri.go:89] found id: "5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65"
	I0918 21:27:13.308908 1096772 cri.go:89] found id: "654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3"
	I0918 21:27:13.308913 1096772 cri.go:89] found id: ""
	I0918 21:27:13.308921 1096772 logs.go:276] 2 containers: [5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65 654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3]
	I0918 21:27:13.308984 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.312499 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.315800 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:27:13.315874 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:27:13.362479 1096772 cri.go:89] found id: "97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57"
	I0918 21:27:13.362503 1096772 cri.go:89] found id: "724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890"
	I0918 21:27:13.362509 1096772 cri.go:89] found id: ""
	I0918 21:27:13.362525 1096772 logs.go:276] 2 containers: [97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57 724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890]
	I0918 21:27:13.362625 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.366451 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.370153 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:27:13.370241 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:27:13.411494 1096772 cri.go:89] found id: "6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e"
	I0918 21:27:13.411520 1096772 cri.go:89] found id: "0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca"
	I0918 21:27:13.411526 1096772 cri.go:89] found id: ""
	I0918 21:27:13.411533 1096772 logs.go:276] 2 containers: [6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e 0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca]
	I0918 21:27:13.411591 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.415339 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.418757 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0918 21:27:13.418839 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:27:13.467492 1096772 cri.go:89] found id: "d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db"
	I0918 21:27:13.467516 1096772 cri.go:89] found id: "db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c"
	I0918 21:27:13.467521 1096772 cri.go:89] found id: ""
	I0918 21:27:13.467529 1096772 logs.go:276] 2 containers: [d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c]
	I0918 21:27:13.467589 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.471594 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.475548 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:27:13.475624 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:27:13.516183 1096772 cri.go:89] found id: "ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e"
	I0918 21:27:13.516211 1096772 cri.go:89] found id: "cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f"
	I0918 21:27:13.516217 1096772 cri.go:89] found id: ""
	I0918 21:27:13.516226 1096772 logs.go:276] 2 containers: [ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f]
	I0918 21:27:13.516288 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.520044 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.523799 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:27:13.523885 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:27:13.562638 1096772 cri.go:89] found id: "d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e"
	I0918 21:27:13.562700 1096772 cri.go:89] found id: ""
	I0918 21:27:13.562732 1096772 logs.go:276] 1 containers: [d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e]
	I0918 21:27:13.562819 1096772 ssh_runner.go:195] Run: which crictl
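
With the wait abandoned, the test shifts to log collection: for each component it asks crictl for all container IDs (running and exited, hence two IDs per component after the restart) matching a name filter, then resolves crictl with which before tailing each container's logs. A small Go sketch of the listing step, mirroring the sudo crictl ps -a --quiet --name=<component> commands shown above (containerIDs is a hypothetical helper, not minikube's cri.go):

    // a small sketch of the container-discovery step; containerIDs is a
    // hypothetical helper mirroring the crictl calls in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists IDs of all containers (running or exited) whose name
    // matches the filter: sudo crictl ps -a --quiet --name=<name>
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps: %w", err)
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(ids) // e.g. two IDs: the live apiserver and the pre-restart one
    }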
	I0918 21:27:13.566702 1096772 logs.go:123] Gathering logs for dmesg ...
	I0918 21:27:13.566777 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:27:13.584524 1096772 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:27:13.584556 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:27:13.731424 1096772 logs.go:123] Gathering logs for kube-apiserver [e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f] ...
	I0918 21:27:13.731457 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f"
	I0918 21:27:13.806786 1096772 logs.go:123] Gathering logs for etcd [bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31] ...
	I0918 21:27:13.806823 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31"
	I0918 21:27:13.861512 1096772 logs.go:123] Gathering logs for kube-controller-manager [6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e] ...
	I0918 21:27:13.861545 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e"
	I0918 21:27:13.920411 1096772 logs.go:123] Gathering logs for container status ...
	I0918 21:27:13.920451 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:27:13.965699 1096772 logs.go:123] Gathering logs for etcd [85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155] ...
	I0918 21:27:13.965731 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155"
	I0918 21:27:14.021328 1096772 logs.go:123] Gathering logs for kube-scheduler [5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65] ...
	I0918 21:27:14.021364 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65"
	I0918 21:27:14.068191 1096772 logs.go:123] Gathering logs for kindnet [d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db] ...
	I0918 21:27:14.068220 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db"
	I0918 21:27:14.132069 1096772 logs.go:123] Gathering logs for kubernetes-dashboard [d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e] ...
	I0918 21:27:14.132139 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e"
	I0918 21:27:14.174972 1096772 logs.go:123] Gathering logs for kube-apiserver [e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972] ...
	I0918 21:27:14.175006 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972"
	I0918 21:27:14.240583 1096772 logs.go:123] Gathering logs for coredns [76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77] ...
	I0918 21:27:14.240622 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77"
	I0918 21:27:14.286652 1096772 logs.go:123] Gathering logs for kube-scheduler [654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3] ...
	I0918 21:27:14.286685 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3"
	I0918 21:27:14.337176 1096772 logs.go:123] Gathering logs for kube-proxy [97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57] ...
	I0918 21:27:14.337214 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57"
	I0918 21:27:14.376811 1096772 logs.go:123] Gathering logs for kube-proxy [724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890] ...
	I0918 21:27:14.376901 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890"
	I0918 21:27:14.416199 1096772 logs.go:123] Gathering logs for kube-controller-manager [0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca] ...
	I0918 21:27:14.416229 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca"
	I0918 21:27:14.484345 1096772 logs.go:123] Gathering logs for storage-provisioner [cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f] ...
	I0918 21:27:14.484386 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f"
	I0918 21:27:14.523378 1096772 logs.go:123] Gathering logs for kubelet ...
	I0918 21:27:14.523407 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:27:14.585137 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:53 old-k8s-version-025914 kubelet[665]: E0918 21:21:53.794983     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-rqmbg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-rqmbg" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.585412 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:53 old-k8s-version-025914 kubelet[665]: E0918 21:21:53.796426     665 reflector.go:138] object-"kube-system"/"kindnet-token-xbssb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xbssb" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.589404 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022067     665 reflector.go:138] object-"kube-system"/"metrics-server-token-9b79x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9b79x" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.589616 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022444     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.589820 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022526     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.590060 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022568     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-n2hmt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-n2hmt" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.590275 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022715     665 reflector.go:138] object-"default"/"default-token-65brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-65brt" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.590491 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022772     665 reflector.go:138] object-"kube-system"/"coredns-token-jl4pr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jl4pr" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.598022 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:55 old-k8s-version-025914 kubelet[665]: E0918 21:21:55.759054     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:14.598215 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:55 old-k8s-version-025914 kubelet[665]: E0918 21:21:55.869815     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.603092 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:06 old-k8s-version-025914 kubelet[665]: E0918 21:22:06.685684     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:14.604813 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:19 old-k8s-version-025914 kubelet[665]: E0918 21:22:19.677495     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.605748 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:24 old-k8s-version-025914 kubelet[665]: E0918 21:22:24.020367     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.606081 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:25 old-k8s-version-025914 kubelet[665]: E0918 21:22:25.025956     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.606529 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:28 old-k8s-version-025914 kubelet[665]: E0918 21:22:28.035972     665 pod_workers.go:191] Error syncing pod a55c40ca-6e3f-4daa-907a-f52eb8fa9d41 ("storage-provisioner_kube-system(a55c40ca-6e3f-4daa-907a-f52eb8fa9d41)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a55c40ca-6e3f-4daa-907a-f52eb8fa9d41)"
	W0918 21:27:14.606859 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:30 old-k8s-version-025914 kubelet[665]: E0918 21:22:30.495050     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.609730 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:34 old-k8s-version-025914 kubelet[665]: E0918 21:22:34.683315     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:14.610505 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:45 old-k8s-version-025914 kubelet[665]: E0918 21:22:45.144685     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.610692 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:45 old-k8s-version-025914 kubelet[665]: E0918 21:22:45.670921     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.611025 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:50 old-k8s-version-025914 kubelet[665]: E0918 21:22:50.494917     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.611213 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:57 old-k8s-version-025914 kubelet[665]: E0918 21:22:57.675883     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.611805 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:07 old-k8s-version-025914 kubelet[665]: E0918 21:23:07.202210     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.611991 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:08 old-k8s-version-025914 kubelet[665]: E0918 21:23:08.671464     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.612332 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:10 old-k8s-version-025914 kubelet[665]: E0918 21:23:10.495097     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.614898 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:19 old-k8s-version-025914 kubelet[665]: E0918 21:23:19.683726     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:14.615233 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:20 old-k8s-version-025914 kubelet[665]: E0918 21:23:20.670558     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.615436 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:30 old-k8s-version-025914 kubelet[665]: E0918 21:23:30.671353     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.615776 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:35 old-k8s-version-025914 kubelet[665]: E0918 21:23:35.670703     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.615964 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:45 old-k8s-version-025914 kubelet[665]: E0918 21:23:45.670861     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.616600 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:47 old-k8s-version-025914 kubelet[665]: E0918 21:23:47.312536     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.616941 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:50 old-k8s-version-025914 kubelet[665]: E0918 21:23:50.495433     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.617131 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:58 old-k8s-version-025914 kubelet[665]: E0918 21:23:58.670952     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.617560 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:02 old-k8s-version-025914 kubelet[665]: E0918 21:24:02.670893     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.617752 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:11 old-k8s-version-025914 kubelet[665]: E0918 21:24:11.670937     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.618091 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:15 old-k8s-version-025914 kubelet[665]: E0918 21:24:15.670558     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.618284 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:22 old-k8s-version-025914 kubelet[665]: E0918 21:24:22.672129     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.618616 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:31 old-k8s-version-025914 kubelet[665]: E0918 21:24:31.670774     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.618804 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:35 old-k8s-version-025914 kubelet[665]: E0918 21:24:35.671357     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.619136 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:45 old-k8s-version-025914 kubelet[665]: E0918 21:24:45.670584     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.621620 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:46 old-k8s-version-025914 kubelet[665]: E0918 21:24:46.681946     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:14.621812 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:59 old-k8s-version-025914 kubelet[665]: E0918 21:24:59.670935     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.622143 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:00 old-k8s-version-025914 kubelet[665]: E0918 21:25:00.670943     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.622333 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:10 old-k8s-version-025914 kubelet[665]: E0918 21:25:10.677379     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.622945 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:16 old-k8s-version-025914 kubelet[665]: E0918 21:25:16.556333     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.623276 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:20 old-k8s-version-025914 kubelet[665]: E0918 21:25:20.495478     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.623464 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:24 old-k8s-version-025914 kubelet[665]: E0918 21:25:24.671063     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.623793 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:32 old-k8s-version-025914 kubelet[665]: E0918 21:25:32.671331     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.623982 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:35 old-k8s-version-025914 kubelet[665]: E0918 21:25:35.670802     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.624317 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:46 old-k8s-version-025914 kubelet[665]: E0918 21:25:46.670903     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.624511 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:46 old-k8s-version-025914 kubelet[665]: E0918 21:25:46.674707     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.624699 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:58 old-k8s-version-025914 kubelet[665]: E0918 21:25:58.670888     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.625030 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:01 old-k8s-version-025914 kubelet[665]: E0918 21:26:01.670449     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.625359 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:12 old-k8s-version-025914 kubelet[665]: E0918 21:26:12.672485     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.625547 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:12 old-k8s-version-025914 kubelet[665]: E0918 21:26:12.672863     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.625887 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:24 old-k8s-version-025914 kubelet[665]: E0918 21:26:24.671138     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.626074 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:25 old-k8s-version-025914 kubelet[665]: E0918 21:26:25.670938     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.626404 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:35 old-k8s-version-025914 kubelet[665]: E0918 21:26:35.670522     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.626591 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:37 old-k8s-version-025914 kubelet[665]: E0918 21:26:37.670947     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.626922 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:48 old-k8s-version-025914 kubelet[665]: E0918 21:26:48.671182     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.627108 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:49 old-k8s-version-025914 kubelet[665]: E0918 21:26:49.670964     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.627437 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:00 old-k8s-version-025914 kubelet[665]: E0918 21:27:00.671211     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.627623 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:03 old-k8s-version-025914 kubelet[665]: E0918 21:27:03.670945     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
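The long run of kubelet problems above reduces to two recurring failures: metrics-server stays in ImagePullBackOff because "fake.domain/registry.k8s.io/echoserver:1.4" names a registry host that never resolves (lookup fake.domain ... no such host), and dashboard-metrics-scraper cycles through CrashLoopBackOff while the kubelet doubles its restart back-off (10s, 20s, 40s, 1m20s, 2m40s, capped at 5m). A minimal sketch for confirming both states from outside the node, assuming the kubectl context matches the profile name and that the addon deployments carry their usual k8s-app labels (both assumptions, not taken from this log):

    # Sketch only: context name and label selectors are assumptions.
    kubectl --context old-k8s-version-025914 -n kube-system get pods \
      -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.containerStatuses[0].state.waiting.reason}{"\n"}{end}'
    kubectl --context old-k8s-version-025914 -n kubernetes-dashboard \
      describe pod -l k8s-app=dashboard-metrics-scraper | grep -A3 'Last State'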
	I0918 21:27:14.627633 1096772 logs.go:123] Gathering logs for coredns [3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e] ...
	I0918 21:27:14.627680 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e"
	I0918 21:27:14.667217 1096772 logs.go:123] Gathering logs for kindnet [db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c] ...
	I0918 21:27:14.667296 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c"
	I0918 21:27:14.721198 1096772 logs.go:123] Gathering logs for storage-provisioner [ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e] ...
	I0918 21:27:14.721230 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e"
	I0918 21:27:14.767188 1096772 logs.go:123] Gathering logs for containerd ...
	I0918 21:27:14.767217 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0918 21:27:14.830243 1096772 out.go:358] Setting ErrFile to fd 2...
	I0918 21:27:14.830276 1096772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0918 21:27:14.830354 1096772 out.go:270] X Problems detected in kubelet:
	W0918 21:27:14.830369 1096772 out.go:270]   Sep 18 21:26:37 old-k8s-version-025914 kubelet[665]: E0918 21:26:37.670947     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.830424 1096772 out.go:270]   Sep 18 21:26:48 old-k8s-version-025914 kubelet[665]: E0918 21:26:48.671182     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.830433 1096772 out.go:270]   Sep 18 21:26:49 old-k8s-version-025914 kubelet[665]: E0918 21:26:49.670964     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.830447 1096772 out.go:270]   Sep 18 21:27:00 old-k8s-version-025914 kubelet[665]: E0918 21:27:00.671211     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.830453 1096772 out.go:270]   Sep 18 21:27:03 old-k8s-version-025914 kubelet[665]: E0918 21:27:03.670945     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0918 21:27:14.830464 1096772 out.go:358] Setting ErrFile to fd 2...
	I0918 21:27:14.830473 1096772 out.go:392] TERM=,COLORTERM=, which probably does not support color
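This block is minikube echoing its kubelet problem summary to stderr before the next polling cycle. The same scan can be reproduced by hand over SSH; a minimal sketch, assuming the profile name doubles as the SSH target and that grepping for the two back-off reasons is enough to surface the same entries:

    # Sketch only: re-run the kubelet problem scan manually.
    minikube ssh -p old-k8s-version-025914 \
      "sudo journalctl -u kubelet -n 400 --no-pager | grep -E 'ImagePullBackOff|CrashLoopBackOff'"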
	I0918 21:27:24.831742 1096772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:27:24.843745 1096772 api_server.go:72] duration metric: took 5m50.188751864s to wait for apiserver process to appear ...
	I0918 21:27:24.843777 1096772 api_server.go:88] waiting for apiserver healthz status ...
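The healthz wait that starts here polls the apiserver's /healthz endpoint until it answers. A minimal sketch of an equivalent manual probe, routed through kubectl so no assumption about the host-mapped apiserver port is needed:

    # Sketch only: a healthy apiserver answers with the literal body "ok".
    kubectl --context old-k8s-version-025914 get --raw /healthz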
	I0918 21:27:24.843814 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:27:24.843875 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:27:24.889507 1096772 cri.go:89] found id: "e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972"
	I0918 21:27:24.889529 1096772 cri.go:89] found id: "e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f"
	I0918 21:27:24.889535 1096772 cri.go:89] found id: ""
	I0918 21:27:24.889543 1096772 logs.go:276] 2 containers: [e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972 e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f]
	I0918 21:27:24.889599 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:24.893345 1096772 ssh_runner.go:195] Run: which crictl
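Each listing round that follows pairs a filtered crictl query with a "which crictl" lookup, collecting container IDs per control-plane component across all states. A minimal sketch of one round, assuming crictl is on PATH inside the node:

    # Sketch only: IDs across all states, then the human-readable view.
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --name=kube-apiserver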
	I0918 21:27:24.896756 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0918 21:27:24.896831 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:27:24.966674 1096772 cri.go:89] found id: "bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31"
	I0918 21:27:24.966693 1096772 cri.go:89] found id: "85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155"
	I0918 21:27:24.966698 1096772 cri.go:89] found id: ""
	I0918 21:27:24.966705 1096772 logs.go:276] 2 containers: [bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31 85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155]
	I0918 21:27:24.966760 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:24.971029 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:24.975788 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0918 21:27:24.975857 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:27:25.025808 1096772 cri.go:89] found id: "3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e"
	I0918 21:27:25.025828 1096772 cri.go:89] found id: "76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77"
	I0918 21:27:25.025833 1096772 cri.go:89] found id: ""
	I0918 21:27:25.025840 1096772 logs.go:276] 2 containers: [3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e 76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77]
	I0918 21:27:25.025907 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.029877 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.033747 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:27:25.033840 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:27:25.073121 1096772 cri.go:89] found id: "5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65"
	I0918 21:27:25.073143 1096772 cri.go:89] found id: "654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3"
	I0918 21:27:25.073148 1096772 cri.go:89] found id: ""
	I0918 21:27:25.073155 1096772 logs.go:276] 2 containers: [5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65 654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3]
	I0918 21:27:25.073213 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.077015 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.080879 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:27:25.080972 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:27:25.120125 1096772 cri.go:89] found id: "97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57"
	I0918 21:27:25.120205 1096772 cri.go:89] found id: "724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890"
	I0918 21:27:25.120227 1096772 cri.go:89] found id: ""
	I0918 21:27:25.120269 1096772 logs.go:276] 2 containers: [97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57 724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890]
	I0918 21:27:25.120352 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.124126 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.127947 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:27:25.128026 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:27:25.167592 1096772 cri.go:89] found id: "6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e"
	I0918 21:27:25.167617 1096772 cri.go:89] found id: "0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca"
	I0918 21:27:25.167623 1096772 cri.go:89] found id: ""
	I0918 21:27:25.167630 1096772 logs.go:276] 2 containers: [6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e 0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca]
	I0918 21:27:25.167689 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.171303 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.174797 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0918 21:27:25.174881 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:27:25.220150 1096772 cri.go:89] found id: "d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db"
	I0918 21:27:25.220171 1096772 cri.go:89] found id: "db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c"
	I0918 21:27:25.220176 1096772 cri.go:89] found id: ""
	I0918 21:27:25.220183 1096772 logs.go:276] 2 containers: [d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c]
	I0918 21:27:25.220239 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.223726 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.227235 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:27:25.227307 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:27:25.264555 1096772 cri.go:89] found id: "ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e"
	I0918 21:27:25.264577 1096772 cri.go:89] found id: "cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f"
	I0918 21:27:25.264583 1096772 cri.go:89] found id: ""
	I0918 21:27:25.264590 1096772 logs.go:276] 2 containers: [ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f]
	I0918 21:27:25.264648 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.268036 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.271313 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:27:25.271383 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:27:25.314052 1096772 cri.go:89] found id: "d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e"
	I0918 21:27:25.314075 1096772 cri.go:89] found id: ""
	I0918 21:27:25.314083 1096772 logs.go:276] 1 containers: [d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e]
	I0918 21:27:25.314166 1096772 ssh_runner.go:195] Run: which crictl
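Most components report two container IDs, consistent with one exited instance from before the restart plus the one running after it; kubernetes-dashboard reports a single ID, consistent with having been started only once. A minimal sketch for checking the state behind each ID, reusing the kube-apiserver IDs listed above:

    # Sketch only: "state" in the inspect JSON distinguishes running from exited.
    for id in e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972 \
              e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f; do
      sudo crictl inspect "$id" | grep -m1 '"state"'
    done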
	I0918 21:27:25.317824 1096772 logs.go:123] Gathering logs for kube-proxy [724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890] ...
	I0918 21:27:25.317851 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890"
	I0918 21:27:25.356179 1096772 logs.go:123] Gathering logs for storage-provisioner [cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f] ...
	I0918 21:27:25.356211 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f"
	I0918 21:27:25.392458 1096772 logs.go:123] Gathering logs for container status ...
	I0918 21:27:25.392487 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
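The container-status gather above is runtime-agnostic: it prefers an absolute crictl path, falls back to the bare name on PATH, and finally to docker on docker-runtime nodes. The same fallback, written out as a sketch:

    # Sketch only: absolute crictl path if found, else PATH lookup, else docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a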
	I0918 21:27:25.448007 1096772 logs.go:123] Gathering logs for dmesg ...
	I0918 21:27:25.448048 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:27:25.465593 1096772 logs.go:123] Gathering logs for kube-apiserver [e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f] ...
	I0918 21:27:25.465665 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f"
	I0918 21:27:25.540704 1096772 logs.go:123] Gathering logs for kindnet [db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c] ...
	I0918 21:27:25.540738 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c"
	I0918 21:27:25.582760 1096772 logs.go:123] Gathering logs for kube-apiserver [e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972] ...
	I0918 21:27:25.582787 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972"
	I0918 21:27:25.651789 1096772 logs.go:123] Gathering logs for etcd [bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31] ...
	I0918 21:27:25.651823 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31"
	I0918 21:27:25.698093 1096772 logs.go:123] Gathering logs for kube-controller-manager [0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca] ...
	I0918 21:27:25.698142 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca"
	I0918 21:27:25.774902 1096772 logs.go:123] Gathering logs for kubernetes-dashboard [d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e] ...
	I0918 21:27:25.774938 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e"
	I0918 21:27:25.824126 1096772 logs.go:123] Gathering logs for containerd ...
	I0918 21:27:25.824155 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0918 21:27:25.883310 1096772 logs.go:123] Gathering logs for kube-scheduler [654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3] ...
	I0918 21:27:25.883344 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3"
	I0918 21:27:25.943284 1096772 logs.go:123] Gathering logs for kube-controller-manager [6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e] ...
	I0918 21:27:25.943316 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e"
	I0918 21:27:26.006550 1096772 logs.go:123] Gathering logs for etcd [85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155] ...
	I0918 21:27:26.006599 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155"
	I0918 21:27:26.062095 1096772 logs.go:123] Gathering logs for coredns [3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e] ...
	I0918 21:27:26.062126 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e"
	I0918 21:27:26.106172 1096772 logs.go:123] Gathering logs for coredns [76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77] ...
	I0918 21:27:26.106202 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77"
	I0918 21:27:26.145634 1096772 logs.go:123] Gathering logs for kube-scheduler [5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65] ...
	I0918 21:27:26.145706 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65"
	I0918 21:27:26.186822 1096772 logs.go:123] Gathering logs for kube-proxy [97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57] ...
	I0918 21:27:26.186853 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57"
	I0918 21:27:26.237319 1096772 logs.go:123] Gathering logs for kindnet [d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db] ...
	I0918 21:27:26.237346 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db"
	I0918 21:27:26.302566 1096772 logs.go:123] Gathering logs for kubelet ...
	I0918 21:27:26.302596 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
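This kubelet gather repeats the earlier 400-line journal scan, which is why the "Found kubelet problem" entries that follow duplicate the run above verbatim. A minimal sketch of the same window narrowed to the two sources flagged here (the pattern list is an assumption drawn from the entries themselves):

    # Sketch only: the flagged entries all come from reflector.go or pod_workers.go.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -E 'reflector.go|pod_workers.go'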
	W0918 21:27:26.370931 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:53 old-k8s-version-025914 kubelet[665]: E0918 21:21:53.794983     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-rqmbg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-rqmbg" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.371224 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:53 old-k8s-version-025914 kubelet[665]: E0918 21:21:53.796426     665 reflector.go:138] object-"kube-system"/"kindnet-token-xbssb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xbssb" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.375331 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022067     665 reflector.go:138] object-"kube-system"/"metrics-server-token-9b79x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9b79x" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.375546 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022444     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.375747 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022526     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.375974 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022568     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-n2hmt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-n2hmt" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.376196 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022715     665 reflector.go:138] object-"default"/"default-token-65brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-65brt" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.376409 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022772     665 reflector.go:138] object-"kube-system"/"coredns-token-jl4pr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jl4pr" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.384238 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:55 old-k8s-version-025914 kubelet[665]: E0918 21:21:55.759054     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:26.384501 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:55 old-k8s-version-025914 kubelet[665]: E0918 21:21:55.869815     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.389366 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:06 old-k8s-version-025914 kubelet[665]: E0918 21:22:06.685684     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:26.391096 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:19 old-k8s-version-025914 kubelet[665]: E0918 21:22:19.677495     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.392101 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:24 old-k8s-version-025914 kubelet[665]: E0918 21:22:24.020367     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.392556 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:25 old-k8s-version-025914 kubelet[665]: E0918 21:22:25.025956     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.393045 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:28 old-k8s-version-025914 kubelet[665]: E0918 21:22:28.035972     665 pod_workers.go:191] Error syncing pod a55c40ca-6e3f-4daa-907a-f52eb8fa9d41 ("storage-provisioner_kube-system(a55c40ca-6e3f-4daa-907a-f52eb8fa9d41)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a55c40ca-6e3f-4daa-907a-f52eb8fa9d41)"
	W0918 21:27:26.393468 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:30 old-k8s-version-025914 kubelet[665]: E0918 21:22:30.495050     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.396385 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:34 old-k8s-version-025914 kubelet[665]: E0918 21:22:34.683315     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:26.397163 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:45 old-k8s-version-025914 kubelet[665]: E0918 21:22:45.144685     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.397375 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:45 old-k8s-version-025914 kubelet[665]: E0918 21:22:45.670921     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.397735 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:50 old-k8s-version-025914 kubelet[665]: E0918 21:22:50.494917     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.397946 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:57 old-k8s-version-025914 kubelet[665]: E0918 21:22:57.675883     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.398560 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:07 old-k8s-version-025914 kubelet[665]: E0918 21:23:07.202210     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.398771 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:08 old-k8s-version-025914 kubelet[665]: E0918 21:23:08.671464     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.399128 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:10 old-k8s-version-025914 kubelet[665]: E0918 21:23:10.495097     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.401614 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:19 old-k8s-version-025914 kubelet[665]: E0918 21:23:19.683726     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:26.401974 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:20 old-k8s-version-025914 kubelet[665]: E0918 21:23:20.670558     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.402186 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:30 old-k8s-version-025914 kubelet[665]: E0918 21:23:30.671353     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.402546 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:35 old-k8s-version-025914 kubelet[665]: E0918 21:23:35.670703     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.402763 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:45 old-k8s-version-025914 kubelet[665]: E0918 21:23:45.670861     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.403379 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:47 old-k8s-version-025914 kubelet[665]: E0918 21:23:47.312536     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.403738 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:50 old-k8s-version-025914 kubelet[665]: E0918 21:23:50.495433     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.403949 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:58 old-k8s-version-025914 kubelet[665]: E0918 21:23:58.670952     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.404310 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:02 old-k8s-version-025914 kubelet[665]: E0918 21:24:02.670893     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.404538 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:11 old-k8s-version-025914 kubelet[665]: E0918 21:24:11.670937     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.404891 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:15 old-k8s-version-025914 kubelet[665]: E0918 21:24:15.670558     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.405103 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:22 old-k8s-version-025914 kubelet[665]: E0918 21:24:22.672129     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.405478 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:31 old-k8s-version-025914 kubelet[665]: E0918 21:24:31.670774     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.405696 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:35 old-k8s-version-025914 kubelet[665]: E0918 21:24:35.671357     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.406054 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:45 old-k8s-version-025914 kubelet[665]: E0918 21:24:45.670584     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.408562 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:46 old-k8s-version-025914 kubelet[665]: E0918 21:24:46.681946     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:26.408792 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:59 old-k8s-version-025914 kubelet[665]: E0918 21:24:59.670935     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.409167 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:00 old-k8s-version-025914 kubelet[665]: E0918 21:25:00.670943     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.409379 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:10 old-k8s-version-025914 kubelet[665]: E0918 21:25:10.677379     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.409998 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:16 old-k8s-version-025914 kubelet[665]: E0918 21:25:16.556333     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.410365 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:20 old-k8s-version-025914 kubelet[665]: E0918 21:25:20.495478     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.410581 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:24 old-k8s-version-025914 kubelet[665]: E0918 21:25:24.671063     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.410935 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:32 old-k8s-version-025914 kubelet[665]: E0918 21:25:32.671331     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.411163 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:35 old-k8s-version-025914 kubelet[665]: E0918 21:25:35.670802     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.411520 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:46 old-k8s-version-025914 kubelet[665]: E0918 21:25:46.670903     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.411732 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:46 old-k8s-version-025914 kubelet[665]: E0918 21:25:46.674707     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.411944 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:58 old-k8s-version-025914 kubelet[665]: E0918 21:25:58.670888     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.412310 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:01 old-k8s-version-025914 kubelet[665]: E0918 21:26:01.670449     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.412670 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:12 old-k8s-version-025914 kubelet[665]: E0918 21:26:12.672485     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.412880 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:12 old-k8s-version-025914 kubelet[665]: E0918 21:26:12.672863     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.413263 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:24 old-k8s-version-025914 kubelet[665]: E0918 21:26:24.671138     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.413473 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:25 old-k8s-version-025914 kubelet[665]: E0918 21:26:25.670938     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.413828 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:35 old-k8s-version-025914 kubelet[665]: E0918 21:26:35.670522     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.414073 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:37 old-k8s-version-025914 kubelet[665]: E0918 21:26:37.670947     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.414471 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:48 old-k8s-version-025914 kubelet[665]: E0918 21:26:48.671182     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.414732 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:49 old-k8s-version-025914 kubelet[665]: E0918 21:26:49.670964     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.415107 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:00 old-k8s-version-025914 kubelet[665]: E0918 21:27:00.671211     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.415336 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:03 old-k8s-version-025914 kubelet[665]: E0918 21:27:03.670945     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.415692 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:14 old-k8s-version-025914 kubelet[665]: E0918 21:27:14.674590     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.415902 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:15 old-k8s-version-025914 kubelet[665]: E0918 21:27:15.670877     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0918 21:27:26.415929 1096772 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:27:26.415962 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:27:26.568040 1096772 logs.go:123] Gathering logs for storage-provisioner [ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e] ...
	I0918 21:27:26.568072 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e"
	I0918 21:27:26.609865 1096772 out.go:358] Setting ErrFile to fd 2...
	I0918 21:27:26.609892 1096772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0918 21:27:26.609939 1096772 out.go:270] X Problems detected in kubelet:
	W0918 21:27:26.609954 1096772 out.go:270]   Sep 18 21:26:49 old-k8s-version-025914 kubelet[665]: E0918 21:26:49.670964     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.609962 1096772 out.go:270]   Sep 18 21:27:00 old-k8s-version-025914 kubelet[665]: E0918 21:27:00.671211     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.609970 1096772 out.go:270]   Sep 18 21:27:03 old-k8s-version-025914 kubelet[665]: E0918 21:27:03.670945     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.609979 1096772 out.go:270]   Sep 18 21:27:14 old-k8s-version-025914 kubelet[665]: E0918 21:27:14.674590     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.609991 1096772 out.go:270]   Sep 18 21:27:15 old-k8s-version-025914 kubelet[665]: E0918 21:27:15.670877     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0918 21:27:26.609996 1096772 out.go:358] Setting ErrFile to fd 2...
	I0918 21:27:26.610002 1096772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:27:36.611419 1096772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0918 21:27:36.628302 1096772 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0918 21:27:36.640948 1096772 out.go:201] 
	W0918 21:27:36.644770 1096772 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0918 21:27:36.644810 1096772 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0918 21:27:36.644833 1096772 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0918 21:27:36.644841 1096772 out.go:270] * 
	W0918 21:27:36.645761 1096772 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 21:27:36.649040 1096772 out.go:201] 

                                                
                                                
** /stderr **
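The wall of "Found kubelet problem" entries in the stderr above is expected noise for this test: per the Audit table further down, metrics-server was enabled with --registries=MetricsServer=fake.domain, so every pull of its image must fail DNS resolution (the alternating ErrImagePull / ImagePullBackOff against fake.domain). A minimal sketch for confirming the rewritten image reference from the host, assuming the kubectl context matches the profile name (as elsewhere in this report) and the standard metrics-server deployment name:

    # Should print the deliberately unresolvable image:
    # fake.domain/registry.k8s.io/echoserver:1.4
    kubectl --context old-k8s-version-025914 -n kube-system \
      get deployment metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'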
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-025914 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
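minikube's own exit message above names the recovery path. A minimal sketch, assuming a local checkout with the same out/minikube-linux-arm64 binary; it purges all profiles and then re-issues the exact start invocation that returned exit status 102:

    # Wipe all profiles and cached state, per the K8S_UNHEALTHY_CONTROL_PLANE hint
    out/minikube-linux-arm64 delete --all --purge
    # Retry the failing start verbatim
    out/minikube-linux-arm64 start -p old-k8s-version-025914 --memory=2200 \
      --alsologtostderr --wait=true --kvm-network=default \
      --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
      --keep-context=false --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.20.0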
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-025914
helpers_test.go:235: (dbg) docker inspect old-k8s-version-025914:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13c6dd4cf82cd677a8817809d4a56ad2d66840e32708e4baa181ee801ead2f9c",
	        "Created": "2024-09-18T21:18:31.035307618Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1096985,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-18T21:21:27.224432258Z",
	            "FinishedAt": "2024-09-18T21:21:25.51080427Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/13c6dd4cf82cd677a8817809d4a56ad2d66840e32708e4baa181ee801ead2f9c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13c6dd4cf82cd677a8817809d4a56ad2d66840e32708e4baa181ee801ead2f9c/hostname",
	        "HostsPath": "/var/lib/docker/containers/13c6dd4cf82cd677a8817809d4a56ad2d66840e32708e4baa181ee801ead2f9c/hosts",
	        "LogPath": "/var/lib/docker/containers/13c6dd4cf82cd677a8817809d4a56ad2d66840e32708e4baa181ee801ead2f9c/13c6dd4cf82cd677a8817809d4a56ad2d66840e32708e4baa181ee801ead2f9c-json.log",
	        "Name": "/old-k8s-version-025914",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-025914:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-025914",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ce6afa0d72a1bf39211418261941dbca7e2ce235e4ba1743399b3ec6f5eecf12-init/diff:/var/lib/docker/overlay2/e15030a03ca75c521300a5809bba283a333356a542417dabfffce840b03425c4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce6afa0d72a1bf39211418261941dbca7e2ce235e4ba1743399b3ec6f5eecf12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce6afa0d72a1bf39211418261941dbca7e2ce235e4ba1743399b3ec6f5eecf12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce6afa0d72a1bf39211418261941dbca7e2ce235e4ba1743399b3ec6f5eecf12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-025914",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-025914/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-025914",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-025914",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-025914",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0899ce7fc31f88d63cc0c6501139224f031eb287952190b77cfdd0c59146a5a7",
	            "SandboxKey": "/var/run/docker/netns/0899ce7fc31f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34175"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34176"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34179"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34177"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34178"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-025914": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "40dfc13f772f6b04597c1a6a3e3173b97dd3338b5c8a05e5c45460e0d70901fe",
	                    "EndpointID": "55e3f0879f065dc1e9271ae9018e076dd578cebf9e49e73fd130bd7a1fb31f54",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-025914",
	                        "13c6dd4cf82c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
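The inspect output ties together the two addresses seen earlier: the cluster-internal IP 192.168.85.2 that the healthz probe hit, and the 127.0.0.1 port bindings (8443/tcp -> 34178) for reaching the apiserver from the host. A sketch for probing healthz through the published port, assuming the container is still running with the bindings shown:

    # Resolve the host port bound to the apiserver's 8443/tcp (34178 in this run)
    PORT=$(docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-025914)
    # -k because the apiserver cert is issued for the cluster, not 127.0.0.1
    curl -sk "https://127.0.0.1:${PORT}/healthz"   # expect: ok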
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-025914 -n old-k8s-version-025914
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-025914 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-025914 logs -n 25: (2.644862408s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-979291                                        | pause-979291             | jenkins | v1.34.0 | 18 Sep 24 21:17 UTC | 18 Sep 24 21:17 UTC |
	| start   | -p cert-expiration-033085                              | cert-expiration-033085   | jenkins | v1.34.0 | 18 Sep 24 21:17 UTC | 18 Sep 24 21:17 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-242864                               | force-systemd-env-242864 | jenkins | v1.34.0 | 18 Sep 24 21:17 UTC | 18 Sep 24 21:17 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-242864                            | force-systemd-env-242864 | jenkins | v1.34.0 | 18 Sep 24 21:17 UTC | 18 Sep 24 21:17 UTC |
	| start   | -p cert-options-106250                                 | cert-options-106250      | jenkins | v1.34.0 | 18 Sep 24 21:17 UTC | 18 Sep 24 21:18 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-106250 ssh                                | cert-options-106250      | jenkins | v1.34.0 | 18 Sep 24 21:18 UTC | 18 Sep 24 21:18 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-106250 -- sudo                         | cert-options-106250      | jenkins | v1.34.0 | 18 Sep 24 21:18 UTC | 18 Sep 24 21:18 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-106250                                 | cert-options-106250      | jenkins | v1.34.0 | 18 Sep 24 21:18 UTC | 18 Sep 24 21:18 UTC |
	| start   | -p old-k8s-version-025914                              | old-k8s-version-025914   | jenkins | v1.34.0 | 18 Sep 24 21:18 UTC | 18 Sep 24 21:20 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-033085                              | cert-expiration-033085   | jenkins | v1.34.0 | 18 Sep 24 21:20 UTC | 18 Sep 24 21:21 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-033085                              | cert-expiration-033085   | jenkins | v1.34.0 | 18 Sep 24 21:21 UTC | 18 Sep 24 21:21 UTC |
	| start   | -p no-preload-460226                                   | no-preload-460226        | jenkins | v1.34.0 | 18 Sep 24 21:21 UTC | 18 Sep 24 21:22 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-025914        | old-k8s-version-025914   | jenkins | v1.34.0 | 18 Sep 24 21:21 UTC | 18 Sep 24 21:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-025914                              | old-k8s-version-025914   | jenkins | v1.34.0 | 18 Sep 24 21:21 UTC | 18 Sep 24 21:21 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-025914             | old-k8s-version-025914   | jenkins | v1.34.0 | 18 Sep 24 21:21 UTC | 18 Sep 24 21:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-025914                              | old-k8s-version-025914   | jenkins | v1.34.0 | 18 Sep 24 21:21 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-460226             | no-preload-460226        | jenkins | v1.34.0 | 18 Sep 24 21:22 UTC | 18 Sep 24 21:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-460226                                   | no-preload-460226        | jenkins | v1.34.0 | 18 Sep 24 21:22 UTC | 18 Sep 24 21:22 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-460226                  | no-preload-460226        | jenkins | v1.34.0 | 18 Sep 24 21:22 UTC | 18 Sep 24 21:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-460226                                   | no-preload-460226        | jenkins | v1.34.0 | 18 Sep 24 21:22 UTC | 18 Sep 24 21:27 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| image   | no-preload-460226 image list                           | no-preload-460226        | jenkins | v1.34.0 | 18 Sep 24 21:27 UTC | 18 Sep 24 21:27 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-460226                                   | no-preload-460226        | jenkins | v1.34.0 | 18 Sep 24 21:27 UTC | 18 Sep 24 21:27 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-460226                                   | no-preload-460226        | jenkins | v1.34.0 | 18 Sep 24 21:27 UTC | 18 Sep 24 21:27 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-460226                                   | no-preload-460226        | jenkins | v1.34.0 | 18 Sep 24 21:27 UTC | 18 Sep 24 21:27 UTC |
	| delete  | -p no-preload-460226                                   | no-preload-460226        | jenkins | v1.34.0 | 18 Sep 24 21:27 UTC |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 21:22:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 21:22:30.881319 1101836 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:22:30.881498 1101836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:22:30.881529 1101836 out.go:358] Setting ErrFile to fd 2...
	I0918 21:22:30.881552 1101836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:22:30.881801 1101836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	I0918 21:22:30.882188 1101836 out.go:352] Setting JSON to false
	I0918 21:22:30.883282 1101836 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18299,"bootTime":1726676252,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0918 21:22:30.883374 1101836 start.go:139] virtualization:  
	I0918 21:22:30.885959 1101836 out.go:177] * [no-preload-460226] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0918 21:22:30.888317 1101836 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:22:30.888403 1101836 notify.go:220] Checking for updates...
	I0918 21:22:30.891811 1101836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:22:30.893543 1101836 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 21:22:30.895827 1101836 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	I0918 21:22:30.898065 1101836 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 21:22:30.900149 1101836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:22:30.902670 1101836 config.go:182] Loaded profile config "no-preload-460226": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 21:22:30.903206 1101836 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:22:30.933768 1101836 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 21:22:30.935135 1101836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 21:22:30.995124 1101836 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-18 21:22:30.985132979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 21:22:30.995237 1101836 docker.go:318] overlay module found
	I0918 21:22:30.997592 1101836 out.go:177] * Using the docker driver based on existing profile
	I0918 21:22:30.999191 1101836 start.go:297] selected driver: docker
	I0918 21:22:30.999216 1101836 start.go:901] validating driver "docker" against &{Name:no-preload-460226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-460226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:22:30.999352 1101836 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:22:31.000031 1101836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 21:22:31.058612 1101836 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-18 21:22:31.047047334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 21:22:31.059205 1101836 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:22:31.059239 1101836 cni.go:84] Creating CNI manager for ""
	I0918 21:22:31.059292 1101836 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0918 21:22:31.059337 1101836 start.go:340] cluster config:
	{Name:no-preload-460226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-460226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:22:31.063183 1101836 out.go:177] * Starting "no-preload-460226" primary control-plane node in "no-preload-460226" cluster
	I0918 21:22:31.065064 1101836 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0918 21:22:31.067206 1101836 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0918 21:22:31.069369 1101836 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0918 21:22:31.069492 1101836 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0918 21:22:31.069678 1101836 cache.go:107] acquiring lock: {Name:mk1ed9cd4740cde0d5d74fe96d4bb401c543488d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:22:31.069775 1101836 cache.go:115] /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0918 21:22:31.069787 1101836 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 113.853µs
	I0918 21:22:31.069801 1101836 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0918 21:22:31.069812 1101836 cache.go:107] acquiring lock: {Name:mk729c0143d2115bf9a9da7029b612aa8ea382d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:22:31.069843 1101836 cache.go:115] /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0918 21:22:31.069849 1101836 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 37.982µs
	I0918 21:22:31.069856 1101836 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0918 21:22:31.069865 1101836 cache.go:107] acquiring lock: {Name:mk4920d1782646b40f50f3e94572d8eca02ea23f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:22:31.069900 1101836 cache.go:115] /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0918 21:22:31.069914 1101836 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 49.641µs
	I0918 21:22:31.069926 1101836 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0918 21:22:31.069936 1101836 cache.go:107] acquiring lock: {Name:mk88893a5178e8fd9ec2cf31c6749a968aeefc1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:22:31.069970 1101836 cache.go:115] /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0918 21:22:31.069981 1101836 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 45.792µs
	I0918 21:22:31.069987 1101836 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0918 21:22:31.069995 1101836 cache.go:107] acquiring lock: {Name:mk289243551f9d743710377610342b4c7b0f06c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:22:31.070021 1101836 cache.go:115] /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0918 21:22:31.070026 1101836 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 32.066µs
	I0918 21:22:31.070044 1101836 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0918 21:22:31.070056 1101836 cache.go:107] acquiring lock: {Name:mk69da64dc8b711da0dfc24dfb5e1e66e8c1a02b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:22:31.070088 1101836 cache.go:115] /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0918 21:22:31.070098 1101836 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 43.167µs
	I0918 21:22:31.070104 1101836 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0918 21:22:31.070113 1101836 cache.go:107] acquiring lock: {Name:mka2078efea9dc248896f986b5e8e83944459cfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:22:31.070144 1101836 cache.go:115] /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0918 21:22:31.070153 1101836 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 40.977µs
	I0918 21:22:31.070159 1101836 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0918 21:22:31.070167 1101836 cache.go:107] acquiring lock: {Name:mkf771288b7a24b4051cbdeb96acaa5fbe4f527c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:22:31.070195 1101836 cache.go:115] /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0918 21:22:31.070205 1101836 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 39.228µs
	I0918 21:22:31.070212 1101836 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0918 21:22:31.070218 1101836 cache.go:87] Successfully saved all images to host disk.
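Every image needed for --preload=false came back as a cache hit above: each "exists ... succeeded" pair means the tarball was already on disk, so nothing had to be pulled. A quick way to confirm the cache contents on the build host (directories taken from the log above; an illustrative check, not part of the test run):

	ls /home/jenkins/minikube-integration/19667-874114/.minikube/cache/images/arm64/registry.k8s.io/
	# expected, per the log: coredns/  etcd_3.5.15-0  kube-apiserver_v1.31.1  kube-controller-manager_v1.31.1
	#                        kube-proxy_v1.31.1  kube-scheduler_v1.31.1  pause_3.10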
	I0918 21:22:31.069533 1101836 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/config.json ...
	W0918 21:22:31.099052 1101836 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0918 21:22:31.099076 1101836 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 21:22:31.099175 1101836 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 21:22:31.099212 1101836 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0918 21:22:31.099218 1101836 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0918 21:22:31.099227 1101836 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0918 21:22:31.099235 1101836 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0918 21:22:31.272342 1101836 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0918 21:22:31.272383 1101836 cache.go:194] Successfully downloaded all kic artifacts
	I0918 21:22:31.272414 1101836 start.go:360] acquireMachinesLock for no-preload-460226: {Name:mk9a89cb21bc07b00ae4d7e804f4d1a060ef083d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:22:31.272488 1101836 start.go:364] duration metric: took 53.883µs to acquireMachinesLock for "no-preload-460226"
	I0918 21:22:31.272515 1101836 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:22:31.272522 1101836 fix.go:54] fixHost starting: 
	I0918 21:22:31.272815 1101836 cli_runner.go:164] Run: docker container inspect no-preload-460226 --format={{.State.Status}}
	I0918 21:22:31.289520 1101836 fix.go:112] recreateIfNeeded on no-preload-460226: state=Stopped err=<nil>
	W0918 21:22:31.289548 1101836 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:22:31.293179 1101836 out.go:177] * Restarting existing docker container for "no-preload-460226" ...
	I0918 21:22:27.584518 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:29.584757 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:31.295272 1101836 cli_runner.go:164] Run: docker start no-preload-460226
	I0918 21:22:31.616484 1101836 cli_runner.go:164] Run: docker container inspect no-preload-460226 --format={{.State.Status}}
	I0918 21:22:31.636391 1101836 kic.go:430] container "no-preload-460226" state is running.
	I0918 21:22:31.637025 1101836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-460226
	I0918 21:22:31.664879 1101836 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/config.json ...
	I0918 21:22:31.665536 1101836 machine.go:93] provisionDockerMachine start ...
	I0918 21:22:31.665774 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:31.692579 1101836 main.go:141] libmachine: Using SSH client type: native
	I0918 21:22:31.693114 1101836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34180 <nil> <nil>}
	I0918 21:22:31.693132 1101836 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:22:31.693657 1101836 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42938->127.0.0.1:34180: read: connection reset by peer
	I0918 21:22:34.843338 1101836 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-460226
	
	I0918 21:22:34.843379 1101836 ubuntu.go:169] provisioning hostname "no-preload-460226"
	I0918 21:22:34.843445 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:34.861230 1101836 main.go:141] libmachine: Using SSH client type: native
	I0918 21:22:34.861493 1101836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34180 <nil> <nil>}
	I0918 21:22:34.861517 1101836 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-460226 && echo "no-preload-460226" | sudo tee /etc/hostname
	I0918 21:22:35.036062 1101836 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-460226
	
	I0918 21:22:35.036180 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:35.053650 1101836 main.go:141] libmachine: Using SSH client type: native
	I0918 21:22:35.053897 1101836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34180 <nil> <nil>}
	I0918 21:22:35.053921 1101836 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-460226' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-460226/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-460226' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:22:35.204115 1101836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
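The guarded script above is idempotent: it only appends or rewrites the 127.0.1.1 entry when the hostname is missing, which is consistent with the empty command output here on a restarted profile. A minimal spot check (illustrative):

	grep -n 'no-preload-460226' /etc/hosts
	# expected: a '127.0.1.1 no-preload-460226' line (the Debian/Ubuntu hostname convention)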
	I0918 21:22:35.204154 1101836 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19667-874114/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-874114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-874114/.minikube}
	I0918 21:22:35.204178 1101836 ubuntu.go:177] setting up certificates
	I0918 21:22:35.204187 1101836 provision.go:84] configureAuth start
	I0918 21:22:35.204259 1101836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-460226
	I0918 21:22:35.221143 1101836 provision.go:143] copyHostCerts
	I0918 21:22:35.221217 1101836 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-874114/.minikube/ca.pem, removing ...
	I0918 21:22:35.221233 1101836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-874114/.minikube/ca.pem
	I0918 21:22:35.221316 1101836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-874114/.minikube/ca.pem (1082 bytes)
	I0918 21:22:35.221432 1101836 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-874114/.minikube/cert.pem, removing ...
	I0918 21:22:35.221443 1101836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-874114/.minikube/cert.pem
	I0918 21:22:35.221473 1101836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-874114/.minikube/cert.pem (1123 bytes)
	I0918 21:22:35.221545 1101836 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-874114/.minikube/key.pem, removing ...
	I0918 21:22:35.221555 1101836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-874114/.minikube/key.pem
	I0918 21:22:35.221582 1101836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-874114/.minikube/key.pem (1679 bytes)
	I0918 21:22:35.221644 1101836 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-874114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca-key.pem org=jenkins.no-preload-460226 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-460226]
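configureAuth regenerates the machine server certificate with the SAN set shown above (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-460226). One way to eyeball those SANs with stock openssl (path from the log; illustrative only):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19667-874114/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'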
	I0918 21:22:35.497034 1101836 provision.go:177] copyRemoteCerts
	I0918 21:22:35.497104 1101836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:22:35.497145 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:35.514561 1101836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/no-preload-460226/id_rsa Username:docker}
	I0918 21:22:35.621413 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:22:35.653679 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:22:35.678849 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0918 21:22:35.704664 1101836 provision.go:87] duration metric: took 500.458797ms to configureAuth
	I0918 21:22:35.704711 1101836 ubuntu.go:193] setting minikube options for container-runtime
	I0918 21:22:35.704926 1101836 config.go:182] Loaded profile config "no-preload-460226": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 21:22:35.704938 1101836 machine.go:96] duration metric: took 4.039389486s to provisionDockerMachine
	I0918 21:22:35.704947 1101836 start.go:293] postStartSetup for "no-preload-460226" (driver="docker")
	I0918 21:22:35.704957 1101836 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:22:35.705016 1101836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:22:35.705059 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:35.722018 1101836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/no-preload-460226/id_rsa Username:docker}
	I0918 21:22:35.828976 1101836 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:22:35.832034 1101836 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 21:22:35.832069 1101836 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 21:22:35.832104 1101836 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 21:22:35.832118 1101836 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0918 21:22:35.832130 1101836 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-874114/.minikube/addons for local assets ...
	I0918 21:22:35.832195 1101836 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-874114/.minikube/files for local assets ...
	I0918 21:22:35.832291 1101836 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-874114/.minikube/files/etc/ssl/certs/8794972.pem -> 8794972.pem in /etc/ssl/certs
	I0918 21:22:35.832399 1101836 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:22:35.841207 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/files/etc/ssl/certs/8794972.pem --> /etc/ssl/certs/8794972.pem (1708 bytes)
	I0918 21:22:35.865383 1101836 start.go:296] duration metric: took 160.406826ms for postStartSetup
	I0918 21:22:35.865513 1101836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 21:22:35.865563 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:32.087354 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:34.583990 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:36.584482 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:35.881850 1101836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/no-preload-460226/id_rsa Username:docker}
	I0918 21:22:35.982262 1101836 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 21:22:35.989671 1101836 fix.go:56] duration metric: took 4.717140459s for fixHost
	I0918 21:22:35.989698 1101836 start.go:83] releasing machines lock for "no-preload-460226", held for 4.717196681s
	I0918 21:22:35.989780 1101836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-460226
	I0918 21:22:36.014448 1101836 ssh_runner.go:195] Run: cat /version.json
	I0918 21:22:36.014512 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:36.014854 1101836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:22:36.014957 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:36.035936 1101836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/no-preload-460226/id_rsa Username:docker}
	I0918 21:22:36.047092 1101836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/no-preload-460226/id_rsa Username:docker}
	I0918 21:22:36.285878 1101836 ssh_runner.go:195] Run: systemctl --version
	I0918 21:22:36.290107 1101836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 21:22:36.294592 1101836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0918 21:22:36.312945 1101836 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0918 21:22:36.313030 1101836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:22:36.321567 1101836 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
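The find/sed pipeline above patches any loopback CNI config in place: it injects a "name" field if one is missing and pins cniVersion to 1.0.0, which is what cni.go:230 reports. After the patch the file should have roughly the canonical loopback shape (a sketch of the conventional config, not captured output):

	cat /etc/cni/net.d/*loopback.conf*
	# roughly: { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }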
	I0918 21:22:36.321595 1101836 start.go:495] detecting cgroup driver to use...
	I0918 21:22:36.321629 1101836 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 21:22:36.321678 1101836 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 21:22:36.335049 1101836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 21:22:36.347518 1101836 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:22:36.347627 1101836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:22:36.360567 1101836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:22:36.373160 1101836 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:22:36.466239 1101836 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:22:36.552944 1101836 docker.go:233] disabling docker service ...
	I0918 21:22:36.553061 1101836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:22:36.566099 1101836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:22:36.578471 1101836 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:22:36.679848 1101836 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:22:36.776247 1101836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:22:36.788404 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:22:36.804804 1101836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0918 21:22:36.815384 1101836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 21:22:36.825614 1101836 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 21:22:36.825690 1101836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 21:22:36.835917 1101836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 21:22:36.846036 1101836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 21:22:36.855774 1101836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 21:22:36.866118 1101836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:22:36.876013 1101836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 21:22:36.886116 1101836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0918 21:22:36.896439 1101836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
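Taken together, the run of sed edits above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10, SystemdCgroup is forced to false (matching the cgroupfs driver detected on the host), legacy runtime names are migrated to io.containerd.runc.v2, and unprivileged ports are enabled. A grep to confirm the result (key names as in containerd 1.7's CRI plugin layout; illustrative):

	sudo grep -nE 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	# expected values after the edits:
	#   sandbox_image = "registry.k8s.io/pause:3.10"
	#   SystemdCgroup = false
	#   enable_unprivileged_ports = true
	#   conf_dir = "/etc/cni/net.d"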
	I0918 21:22:36.907368 1101836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:22:36.917014 1101836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:22:36.925373 1101836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:22:37.018150 1101836 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 21:22:37.190060 1101836 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0918 21:22:37.190136 1101836 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0918 21:22:37.194605 1101836 start.go:563] Will wait 60s for crictl version
	I0918 21:22:37.194675 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:22:37.199538 1101836 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:22:37.236496 1101836 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0918 21:22:37.236623 1101836 ssh_runner.go:195] Run: containerd --version
	I0918 21:22:37.259431 1101836 ssh_runner.go:195] Run: containerd --version
	I0918 21:22:37.297250 1101836 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0918 21:22:37.298894 1101836 cli_runner.go:164] Run: docker network inspect no-preload-460226 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 21:22:37.315228 1101836 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0918 21:22:37.318953 1101836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:22:37.329776 1101836 kubeadm.go:883] updating cluster {Name:no-preload-460226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-460226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:22:37.329907 1101836 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0918 21:22:37.329953 1101836 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:22:37.370689 1101836 containerd.go:627] all images are preloaded for containerd runtime.
	I0918 21:22:37.370721 1101836 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:22:37.370729 1101836 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.1 containerd true true} ...
	I0918 21:22:37.370843 1101836 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-460226 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-460226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
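The kubelet unit text rendered above is installed a few lines below via the two scp calls (kubelet.service plus the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in); the empty ExecStart= line is the standard systemd idiom for clearing the stock command before the override. To view the merged unit exactly as systemd resolves it (illustrative):

	systemctl cat kubelet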
	I0918 21:22:37.370924 1101836 ssh_runner.go:195] Run: sudo crictl info
	I0918 21:22:37.417790 1101836 cni.go:84] Creating CNI manager for ""
	I0918 21:22:37.417813 1101836 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0918 21:22:37.417824 1101836 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:22:37.417868 1101836 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-460226 NodeName:no-preload-460226 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:22:37.418027 1101836 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-460226"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
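	The rendered kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what lands in /var/tmp/minikube/kubeadm.yaml.new a few lines below. To pre-check a config of this shape against the same kubeadm build, recent release lines ship a validator (a hedged sketch; it assumes the v1.31.1 binary includes the kubeadm config validate subcommand):
	
		sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new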
	
	I0918 21:22:37.418101 1101836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:22:37.427889 1101836 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:22:37.427965 1101836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:22:37.436828 1101836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0918 21:22:37.460889 1101836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:22:37.479704 1101836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0918 21:22:37.499383 1101836 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0918 21:22:37.502887 1101836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:22:37.513982 1101836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:22:37.597119 1101836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:22:37.612016 1101836 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226 for IP: 192.168.76.2
	I0918 21:22:37.612040 1101836 certs.go:194] generating shared ca certs ...
	I0918 21:22:37.612056 1101836 certs.go:226] acquiring lock for ca certs: {Name:mk4a2e50bce1acd2df63eb42e5a33734237a5b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:22:37.612264 1101836 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-874114/.minikube/ca.key
	I0918 21:22:37.612327 1101836 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.key
	I0918 21:22:37.612341 1101836 certs.go:256] generating profile certs ...
	I0918 21:22:37.612457 1101836 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.key
	I0918 21:22:37.612534 1101836 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/apiserver.key.4f8cbef1
	I0918 21:22:37.612609 1101836 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/proxy-client.key
	I0918 21:22:37.612772 1101836 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/879497.pem (1338 bytes)
	W0918 21:22:37.612826 1101836 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-874114/.minikube/certs/879497_empty.pem, impossibly tiny 0 bytes
	I0918 21:22:37.612841 1101836 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 21:22:37.612870 1101836 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:22:37.612925 1101836 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:22:37.612958 1101836 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/certs/key.pem (1679 bytes)
	I0918 21:22:37.613033 1101836 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-874114/.minikube/files/etc/ssl/certs/8794972.pem (1708 bytes)
	I0918 21:22:37.613657 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:22:37.647329 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 21:22:37.677628 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:22:37.713793 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:22:37.742659 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 21:22:37.769613 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:22:37.799403 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:22:37.825377 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:22:37.851401 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:22:37.876394 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/certs/879497.pem --> /usr/share/ca-certificates/879497.pem (1338 bytes)
	I0918 21:22:37.912692 1101836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-874114/.minikube/files/etc/ssl/certs/8794972.pem --> /usr/share/ca-certificates/8794972.pem (1708 bytes)
	I0918 21:22:37.938741 1101836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:22:37.958969 1101836 ssh_runner.go:195] Run: openssl version
	I0918 21:22:37.967042 1101836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:22:37.978208 1101836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:22:37.982261 1101836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 20:26 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:22:37.982335 1101836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:22:37.991122 1101836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:22:38.007168 1101836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/879497.pem && ln -fs /usr/share/ca-certificates/879497.pem /etc/ssl/certs/879497.pem"
	I0918 21:22:38.019081 1101836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/879497.pem
	I0918 21:22:38.023396 1101836 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 20:36 /usr/share/ca-certificates/879497.pem
	I0918 21:22:38.023478 1101836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/879497.pem
	I0918 21:22:38.031840 1101836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/879497.pem /etc/ssl/certs/51391683.0"
	I0918 21:22:38.042408 1101836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8794972.pem && ln -fs /usr/share/ca-certificates/8794972.pem /etc/ssl/certs/8794972.pem"
	I0918 21:22:38.053603 1101836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8794972.pem
	I0918 21:22:38.057990 1101836 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 20:36 /usr/share/ca-certificates/8794972.pem
	I0918 21:22:38.058133 1101836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8794972.pem
	I0918 21:22:38.066054 1101836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8794972.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:22:38.077705 1101836 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:22:38.083934 1101836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:22:38.091308 1101836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:22:38.098890 1101836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:22:38.106124 1101836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:22:38.113276 1101836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:22:38.120595 1101836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
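Each "openssl x509 -checkend 86400" above asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it does not, so minikube keeps the existing cert instead of regenerating it. Standalone:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h, would be regenerated"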
	I0918 21:22:38.127633 1101836 kubeadm.go:392] StartCluster: {Name:no-preload-460226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-460226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:22:38.127731 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0918 21:22:38.127832 1101836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:22:38.170312 1101836 cri.go:89] found id: "4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3"
	I0918 21:22:38.170335 1101836 cri.go:89] found id: "567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a"
	I0918 21:22:38.170340 1101836 cri.go:89] found id: "fa264bdcf4a0a1cb20dc9a1c077b07bb033e320b72f3b351c61a8c65d383380b"
	I0918 21:22:38.170344 1101836 cri.go:89] found id: "17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709"
	I0918 21:22:38.170357 1101836 cri.go:89] found id: "4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7"
	I0918 21:22:38.170362 1101836 cri.go:89] found id: "1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a"
	I0918 21:22:38.170365 1101836 cri.go:89] found id: "5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a"
	I0918 21:22:38.170368 1101836 cri.go:89] found id: "06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1"
	I0918 21:22:38.170372 1101836 cri.go:89] found id: ""
	I0918 21:22:38.170426 1101836 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0918 21:22:38.182881 1101836 cri.go:116] JSON = null
	W0918 21:22:38.182933 1101836 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
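The unpause warning above is benign: crictl saw 8 kube-system containers, but runc found no containers at the k8s.io root (JSON = null), so there was nothing in a paused state to resume and the restart simply continues. The two probes, runnable directly on the node:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system  # 8 IDs here
	sudo runc --root /run/containerd/runc/k8s.io list -f json                  # null here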
	I0918 21:22:38.183004 1101836 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:22:38.192448 1101836 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:22:38.192471 1101836 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:22:38.192527 1101836 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:22:38.202248 1101836 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:22:38.202839 1101836 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-460226" does not appear in /home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 21:22:38.203111 1101836 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-874114/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-460226" cluster setting kubeconfig missing "no-preload-460226" context setting]
	I0918 21:22:38.203571 1101836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/kubeconfig: {Name:mke33cc40bb5f82b15bbe41884ab27179b9ca37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:22:38.205265 1101836 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:22:38.215735 1101836 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0918 21:22:38.215768 1101836 kubeadm.go:597] duration metric: took 23.290356ms to restartPrimaryControlPlane
	I0918 21:22:38.215777 1101836 kubeadm.go:394] duration metric: took 88.153172ms to StartCluster
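The "does not require reconfiguration" decision comes from the diff a few lines up: when the freshly rendered /var/tmp/minikube/kubeadm.yaml.new matches the kubeadm.yaml already on the node (and the endpoint IP checks out), minikube skips re-running kubeadm against the control plane, which is why restartPrimaryControlPlane finishes in 23ms. A simplified manual form of the same check:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "configs identical, no kubeadm rerun needed"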
	I0918 21:22:38.215792 1101836 settings.go:142] acquiring lock: {Name:mk57bc44f9fec4b4923bac0bde72e24bb39c4097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:22:38.215851 1101836 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 21:22:38.216904 1101836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-874114/kubeconfig: {Name:mke33cc40bb5f82b15bbe41884ab27179b9ca37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:22:38.217105 1101836 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0918 21:22:38.217377 1101836 config.go:182] Loaded profile config "no-preload-460226": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 21:22:38.217423 1101836 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:22:38.217495 1101836 addons.go:69] Setting storage-provisioner=true in profile "no-preload-460226"
	I0918 21:22:38.217511 1101836 addons.go:234] Setting addon storage-provisioner=true in "no-preload-460226"
	W0918 21:22:38.217517 1101836 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:22:38.217537 1101836 host.go:66] Checking if "no-preload-460226" exists ...
	I0918 21:22:38.217620 1101836 addons.go:69] Setting default-storageclass=true in profile "no-preload-460226"
	I0918 21:22:38.217677 1101836 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-460226"
	I0918 21:22:38.217983 1101836 cli_runner.go:164] Run: docker container inspect no-preload-460226 --format={{.State.Status}}
	I0918 21:22:38.218161 1101836 cli_runner.go:164] Run: docker container inspect no-preload-460226 --format={{.State.Status}}
	I0918 21:22:38.218626 1101836 addons.go:69] Setting dashboard=true in profile "no-preload-460226"
	I0918 21:22:38.218712 1101836 addons.go:234] Setting addon dashboard=true in "no-preload-460226"
	W0918 21:22:38.218739 1101836 addons.go:243] addon dashboard should already be in state true
	I0918 21:22:38.218795 1101836 host.go:66] Checking if "no-preload-460226" exists ...
	I0918 21:22:38.219457 1101836 cli_runner.go:164] Run: docker container inspect no-preload-460226 --format={{.State.Status}}
	I0918 21:22:38.222968 1101836 addons.go:69] Setting metrics-server=true in profile "no-preload-460226"
	I0918 21:22:38.223004 1101836 addons.go:234] Setting addon metrics-server=true in "no-preload-460226"
	W0918 21:22:38.223020 1101836 addons.go:243] addon metrics-server should already be in state true
	I0918 21:22:38.223055 1101836 host.go:66] Checking if "no-preload-460226" exists ...
	I0918 21:22:38.223561 1101836 cli_runner.go:164] Run: docker container inspect no-preload-460226 --format={{.State.Status}}
	I0918 21:22:38.223906 1101836 out.go:177] * Verifying Kubernetes components...
	I0918 21:22:38.227168 1101836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:22:38.293354 1101836 addons.go:234] Setting addon default-storageclass=true in "no-preload-460226"
	W0918 21:22:38.293383 1101836 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:22:38.293411 1101836 host.go:66] Checking if "no-preload-460226" exists ...
	I0918 21:22:38.293867 1101836 cli_runner.go:164] Run: docker container inspect no-preload-460226 --format={{.State.Status}}
	I0918 21:22:38.302221 1101836 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:22:38.307232 1101836 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:22:38.307285 1101836 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:22:38.307358 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:38.309288 1101836 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0918 21:22:38.312876 1101836 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0918 21:22:38.314750 1101836 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0918 21:22:38.314786 1101836 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0918 21:22:38.314854 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:38.320722 1101836 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:22:38.322414 1101836 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:22:38.322435 1101836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:22:38.322503 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:38.357996 1101836 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:22:38.358021 1101836 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:22:38.358173 1101836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-460226
	I0918 21:22:38.387681 1101836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/no-preload-460226/id_rsa Username:docker}
	I0918 21:22:38.388161 1101836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/no-preload-460226/id_rsa Username:docker}
	I0918 21:22:38.393940 1101836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/no-preload-460226/id_rsa Username:docker}
	I0918 21:22:38.419340 1101836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/no-preload-460226/id_rsa Username:docker}
	I0918 21:22:38.444306 1101836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:22:38.507909 1101836 node_ready.go:35] waiting up to 6m0s for node "no-preload-460226" to be "Ready" ...
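node_ready polls the node's Ready condition through the API server until it flips to True (4.8s later, below). A hypothetical equivalent with kubectl, using the context minikube just repaired for this profile:

	kubectl --context no-preload-460226 get node no-preload-460226 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'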
	I0918 21:22:38.635189 1101836 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0918 21:22:38.635213 1101836 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0918 21:22:38.708230 1101836 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:22:38.708293 1101836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:22:38.714781 1101836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:22:38.743562 1101836 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0918 21:22:38.743628 1101836 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0918 21:22:38.798658 1101836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:22:38.806239 1101836 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0918 21:22:38.806304 1101836 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0918 21:22:38.862252 1101836 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:22:38.862341 1101836 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:22:38.986830 1101836 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0918 21:22:38.986856 1101836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0918 21:22:39.042173 1101836 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:22:39.042203 1101836 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:22:39.242464 1101836 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0918 21:22:39.242498 1101836 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0918 21:22:39.302565 1101836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:22:39.326645 1101836 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0918 21:22:39.326701 1101836 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0918 21:22:39.396144 1101836 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0918 21:22:39.396179 1101836 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0918 21:22:39.493315 1101836 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0918 21:22:39.493344 1101836 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0918 21:22:39.579393 1101836 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0918 21:22:39.579417 1101836 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0918 21:22:39.660472 1101836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
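All ten dashboard manifests go through a single kubectl apply with repeated -f flags. Once applied, the rollout can be watched directly; a hedged sketch, assuming the upstream dashboard names (kubernetes-dashboard namespace and deployment):

	kubectl --context no-preload-460226 -n kubernetes-dashboard \
	  rollout status deployment/kubernetes-dashboard --timeout=120s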
	I0918 21:22:38.585452 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:41.087086 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:43.306374 1101836 node_ready.go:49] node "no-preload-460226" has status "Ready":"True"
	I0918 21:22:43.306448 1101836 node_ready.go:38] duration metric: took 4.798484907s for node "no-preload-460226" to be "Ready" ...
	I0918 21:22:43.306476 1101836 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:22:43.432710 1101836 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sd2t5" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.499364 1101836 pod_ready.go:93] pod "coredns-7c65d6cfc9-sd2t5" in "kube-system" namespace has status "Ready":"True"
	I0918 21:22:43.499392 1101836 pod_ready.go:82] duration metric: took 66.64338ms for pod "coredns-7c65d6cfc9-sd2t5" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.499405 1101836 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-460226" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.512980 1101836 pod_ready.go:93] pod "etcd-no-preload-460226" in "kube-system" namespace has status "Ready":"True"
	I0918 21:22:43.513015 1101836 pod_ready.go:82] duration metric: took 13.601433ms for pod "etcd-no-preload-460226" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.513032 1101836 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-460226" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.521516 1101836 pod_ready.go:93] pod "kube-apiserver-no-preload-460226" in "kube-system" namespace has status "Ready":"True"
	I0918 21:22:43.521547 1101836 pod_ready.go:82] duration metric: took 8.507151ms for pod "kube-apiserver-no-preload-460226" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.521559 1101836 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-460226" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.534273 1101836 pod_ready.go:93] pod "kube-controller-manager-no-preload-460226" in "kube-system" namespace has status "Ready":"True"
	I0918 21:22:43.534301 1101836 pod_ready.go:82] duration metric: took 12.734376ms for pod "kube-controller-manager-no-preload-460226" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.534314 1101836 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-84bl9" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.546535 1101836 pod_ready.go:93] pod "kube-proxy-84bl9" in "kube-system" namespace has status "Ready":"True"
	I0918 21:22:43.546568 1101836 pod_ready.go:82] duration metric: took 12.238976ms for pod "kube-proxy-84bl9" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.546579 1101836 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-460226" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.950840 1101836 pod_ready.go:93] pod "kube-scheduler-no-preload-460226" in "kube-system" namespace has status "Ready":"True"
	I0918 21:22:43.950867 1101836 pod_ready.go:82] duration metric: took 404.28014ms for pod "kube-scheduler-no-preload-460226" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:43.950887 1101836 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace to be "Ready" ...
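The "extra" wait above watches the Ready condition of each system-critical pod; for metrics-server it never flips to True in the lines that follow. A hypothetical manual probe of the same condition, assuming the addon's usual k8s-app=metrics-server label:

	kubectl --context no-preload-460226 -n kube-system get pod \
	  -l k8s-app=metrics-server \
	  -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'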
	I0918 21:22:43.583288 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:45.583879 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:45.968976 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:46.517594 1101836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.802737906s)
	I0918 21:22:46.517700 1101836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.71897524s)
	I0918 21:22:46.837005 1101836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.534389037s)
	I0918 21:22:46.837050 1101836 addons.go:475] Verifying addon metrics-server=true in "no-preload-460226"
	I0918 21:22:46.886838 1101836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.226052061s)
	I0918 21:22:46.900258 1101836 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-460226 addons enable metrics-server
	
	I0918 21:22:46.902509 1101836 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0918 21:22:46.904490 1101836 addons.go:510] duration metric: took 8.687060627s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
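With enable addons done in 8.7s, the enabled set can be confirmed from the host:

	minikube -p no-preload-460226 addons list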
	I0918 21:22:48.457108 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:50.457483 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:47.584005 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:49.584297 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:51.585024 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:52.957345 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:55.474206 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:53.585054 1096772 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:56.085819 1096772 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"True"
	I0918 21:22:56.085846 1096772 pod_ready.go:82] duration metric: took 54.008077221s for pod "kube-controller-manager-old-k8s-version-025914" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:56.085859 1096772 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gtz6t" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:56.095245 1096772 pod_ready.go:93] pod "kube-proxy-gtz6t" in "kube-system" namespace has status "Ready":"True"
	I0918 21:22:56.095271 1096772 pod_ready.go:82] duration metric: took 9.404099ms for pod "kube-proxy-gtz6t" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:56.095288 1096772 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace to be "Ready" ...
	I0918 21:22:57.957156 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:59.958884 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:22:58.102579 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:00.166311 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:02.456664 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:04.957012 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:02.601254 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:05.105072 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:07.456946 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:09.957051 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:07.601512 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:09.602768 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:11.959405 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:14.457553 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:12.103103 1096772 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:13.101785 1096772 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace has status "Ready":"True"
	I0918 21:23:13.101813 1096772 pod_ready.go:82] duration metric: took 17.006517353s for pod "kube-scheduler-old-k8s-version-025914" in "kube-system" namespace to be "Ready" ...
	I0918 21:23:13.101826 1096772 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace to be "Ready" ...
	I0918 21:23:15.110518 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:16.957341 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:19.457393 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:17.607133 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:19.608703 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:21.470269 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:23.957026 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:22.110047 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:24.607331 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:26.608568 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:26.456352 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:28.456963 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:30.457251 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:29.108392 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:31.109872 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:32.457338 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:34.457649 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:33.608636 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:36.107501 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:36.956738 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:38.957540 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:38.108616 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:40.110093 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:40.957674 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:43.459541 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:42.607484 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:44.608168 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:46.608298 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:45.957373 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:48.456320 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:50.457159 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:49.107716 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:51.113501 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:52.958342 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:55.457144 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:53.608404 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:56.108272 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:57.957234 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:00.470222 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:23:58.607923 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:00.611020 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:02.957793 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:05.457692 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:03.108929 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:05.109771 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:07.956980 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:10.456736 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:07.608668 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:09.608971 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:12.457144 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:14.457679 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:12.108683 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:14.638422 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:16.458714 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:18.461388 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:17.108068 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:19.108679 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:21.607535 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:20.957040 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:22.957866 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:25.457721 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:23.608133 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:26.107445 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:27.457856 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:29.461131 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:28.109157 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:30.110258 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:31.956807 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:33.956950 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:32.608426 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:35.109127 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:35.957420 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:38.457444 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:40.457741 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:37.607560 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:39.609783 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:42.957469 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:45.458414 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:42.111317 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:44.608598 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:47.957514 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:50.456635 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:47.108627 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:49.608213 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:52.457600 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:54.458102 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:52.109051 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:54.607487 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:56.608593 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:56.956886 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:58.957836 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:24:59.108181 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:01.109061 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:01.457603 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:03.457912 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:05.459013 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:03.608636 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:06.107683 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:07.957025 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:09.963158 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:08.110547 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:10.609335 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:12.458302 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:14.956901 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:13.108218 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:15.130471 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:16.957534 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:19.458317 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:17.607342 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:19.608453 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:21.956991 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:23.957194 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:22.108055 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:24.109937 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:26.607444 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:25.957462 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:28.456834 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:30.457351 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:28.607534 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:30.608291 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:32.957428 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:35.457175 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:33.108540 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:35.109640 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:37.957884 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:40.456811 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:37.607933 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:39.608384 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:42.957039 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:45.457320 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:42.125244 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:44.607391 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:46.607948 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:47.458533 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:49.957903 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:49.107687 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:51.110223 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:52.456668 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:54.457139 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:53.607778 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:56.107955 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:56.957341 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:59.457390 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:25:58.607210 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:00.608463 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:01.457591 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:03.957697 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:03.108557 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:05.108760 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:06.457281 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:08.466930 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:07.109757 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:09.608447 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:10.958697 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:13.457529 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:15.458141 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:12.108605 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:14.608040 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:16.608132 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:17.458485 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:19.957086 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:18.608381 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:21.108125 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:22.456813 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:24.457779 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:23.109333 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:25.608568 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:26.956474 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:28.957461 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:28.108809 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:30.112155 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:30.957616 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:33.457068 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:35.458077 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:32.609879 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:34.614885 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:37.957717 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:40.457297 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:37.107667 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:39.108977 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:41.607672 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:42.457598 1101836 pod_ready.go:103] pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:43.956617 1101836 pod_ready.go:82] duration metric: took 4m0.005714616s for pod "metrics-server-6867b74b74-w984l" in "kube-system" namespace to be "Ready" ...
	E0918 21:26:43.956641 1101836 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0918 21:26:43.956652 1101836 pod_ready.go:39] duration metric: took 4m0.65014997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
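
The 4m0s figure above is the per-pod timeout of the Ready-condition poll in pod_ready.go: the pod's PodReady condition is re-checked every few seconds until it flips to True or the context expires, at which point WaitExtra surfaces "context deadline exceeded". A minimal client-go sketch of that kind of wait (illustrative only, not minikube's source; the kubeconfig path, the 2s interval, and reusing the pod name from the log are assumptions):

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll every 2s for up to 4m, mirroring the "took 4m0.0...s" timeout above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-6867b74b74-w984l", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod has status \"Ready\":%q\n", c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		// On timeout this is exactly the "context deadline exceeded" seen above.
		log.Fatalf("waitPodCondition: %v", err)
	}
}
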
	I0918 21:26:43.956668 1101836 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:26:43.956698 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:26:43.956760 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:26:44.007432 1101836 cri.go:89] found id: "0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03"
	I0918 21:26:44.007456 1101836 cri.go:89] found id: "4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7"
	I0918 21:26:44.007462 1101836 cri.go:89] found id: ""
	I0918 21:26:44.007479 1101836 logs.go:276] 2 containers: [0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03 4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7]
	I0918 21:26:44.007546 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.011503 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.015375 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0918 21:26:44.015453 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:26:44.054558 1101836 cri.go:89] found id: "68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825"
	I0918 21:26:44.054580 1101836 cri.go:89] found id: "06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1"
	I0918 21:26:44.054585 1101836 cri.go:89] found id: ""
	I0918 21:26:44.054592 1101836 logs.go:276] 2 containers: [68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825 06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1]
	I0918 21:26:44.054679 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.058505 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.062044 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0918 21:26:44.062116 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:26:44.101128 1101836 cri.go:89] found id: "629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed"
	I0918 21:26:44.101152 1101836 cri.go:89] found id: "4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3"
	I0918 21:26:44.101160 1101836 cri.go:89] found id: ""
	I0918 21:26:44.101168 1101836 logs.go:276] 2 containers: [629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed 4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3]
	I0918 21:26:44.101227 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.106907 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.111120 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:26:44.111193 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:26:44.156515 1101836 cri.go:89] found id: "57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580"
	I0918 21:26:44.156550 1101836 cri.go:89] found id: "5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a"
	I0918 21:26:44.156555 1101836 cri.go:89] found id: ""
	I0918 21:26:44.156562 1101836 logs.go:276] 2 containers: [57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580 5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a]
	I0918 21:26:44.156634 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.160577 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.163709 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:26:44.163781 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:26:44.203403 1101836 cri.go:89] found id: "0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2"
	I0918 21:26:44.203478 1101836 cri.go:89] found id: "17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709"
	I0918 21:26:44.203498 1101836 cri.go:89] found id: ""
	I0918 21:26:44.203520 1101836 logs.go:276] 2 containers: [0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2 17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709]
	I0918 21:26:44.203608 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.207689 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.211045 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:26:44.211119 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:26:44.257611 1101836 cri.go:89] found id: "a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d"
	I0918 21:26:44.257632 1101836 cri.go:89] found id: "1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a"
	I0918 21:26:44.257638 1101836 cri.go:89] found id: ""
	I0918 21:26:44.257645 1101836 logs.go:276] 2 containers: [a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d 1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a]
	I0918 21:26:44.257702 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.261489 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.265424 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0918 21:26:44.265522 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:26:44.303861 1101836 cri.go:89] found id: "5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b"
	I0918 21:26:44.303940 1101836 cri.go:89] found id: "567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a"
	I0918 21:26:44.303959 1101836 cri.go:89] found id: ""
	I0918 21:26:44.303991 1101836 logs.go:276] 2 containers: [5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b 567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a]
	I0918 21:26:44.304070 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.308119 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.311644 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:26:44.311739 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:26:44.355554 1101836 cri.go:89] found id: "2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232"
	I0918 21:26:44.355630 1101836 cri.go:89] found id: ""
	I0918 21:26:44.355652 1101836 logs.go:276] 1 containers: [2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232]
	I0918 21:26:44.355733 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.359387 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:26:44.359481 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:26:44.397978 1101836 cri.go:89] found id: "594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743"
	I0918 21:26:44.398040 1101836 cri.go:89] found id: "d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f"
	I0918 21:26:44.398051 1101836 cri.go:89] found id: ""
	I0918 21:26:44.398059 1101836 logs.go:276] 2 containers: [594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743 d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f]
	I0918 21:26:44.398117 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.401878 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:44.405133 1101836 logs.go:123] Gathering logs for kubelet ...
	I0918 21:26:44.405157 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:26:44.449035 1101836 logs.go:138] Found kubelet problem: Sep 18 21:22:47 no-preload-460226 kubelet[657]: W0918 21:22:47.380285     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-460226" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-460226' and this object
	W0918 21:26:44.449365 1101836 logs.go:138] Found kubelet problem: Sep 18 21:22:47 no-preload-460226 kubelet[657]: E0918 21:22:47.380496     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-460226\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-460226' and this object" logger="UnhandledError"
	I0918 21:26:44.479937 1101836 logs.go:123] Gathering logs for etcd [06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1] ...
	I0918 21:26:44.479975 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1"
	I0918 21:26:44.533375 1101836 logs.go:123] Gathering logs for kube-scheduler [57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580] ...
	I0918 21:26:44.533406 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580"
	I0918 21:26:44.582576 1101836 logs.go:123] Gathering logs for kube-proxy [17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709] ...
	I0918 21:26:44.582609 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709"
	I0918 21:26:44.638887 1101836 logs.go:123] Gathering logs for kube-controller-manager [a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d] ...
	I0918 21:26:44.638918 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d"
	I0918 21:26:44.729792 1101836 logs.go:123] Gathering logs for storage-provisioner [594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743] ...
	I0918 21:26:44.729831 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743"
	I0918 21:26:44.771078 1101836 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:26:44.771109 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:26:44.922029 1101836 logs.go:123] Gathering logs for etcd [68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825] ...
	I0918 21:26:44.922067 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825"
	I0918 21:26:44.972658 1101836 logs.go:123] Gathering logs for kindnet [5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b] ...
	I0918 21:26:44.972688 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b"
	I0918 21:26:45.055516 1101836 logs.go:123] Gathering logs for kubernetes-dashboard [2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232] ...
	I0918 21:26:45.055573 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232"
	I0918 21:26:45.173856 1101836 logs.go:123] Gathering logs for dmesg ...
	I0918 21:26:45.174001 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:26:45.199008 1101836 logs.go:123] Gathering logs for coredns [4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3] ...
	I0918 21:26:45.199047 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3"
	I0918 21:26:45.256655 1101836 logs.go:123] Gathering logs for kube-scheduler [5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a] ...
	I0918 21:26:45.256701 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a"
	I0918 21:26:45.330181 1101836 logs.go:123] Gathering logs for kube-controller-manager [1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a] ...
	I0918 21:26:45.330225 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a"
	I0918 21:26:45.406765 1101836 logs.go:123] Gathering logs for storage-provisioner [d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f] ...
	I0918 21:26:45.406863 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f"
	I0918 21:26:45.464039 1101836 logs.go:123] Gathering logs for container status ...
	I0918 21:26:45.464160 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:26:45.511545 1101836 logs.go:123] Gathering logs for kube-apiserver [0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03] ...
	I0918 21:26:45.511574 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03"
	I0918 21:26:45.564858 1101836 logs.go:123] Gathering logs for kube-apiserver [4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7] ...
	I0918 21:26:45.564891 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7"
	I0918 21:26:45.623215 1101836 logs.go:123] Gathering logs for coredns [629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed] ...
	I0918 21:26:45.623253 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed"
	I0918 21:26:45.664781 1101836 logs.go:123] Gathering logs for kube-proxy [0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2] ...
	I0918 21:26:45.664816 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2"
	I0918 21:26:45.703721 1101836 logs.go:123] Gathering logs for kindnet [567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a] ...
	I0918 21:26:45.703749 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a"
	I0918 21:26:45.743497 1101836 logs.go:123] Gathering logs for containerd ...
	I0918 21:26:45.743525 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0918 21:26:45.808326 1101836 out.go:358] Setting ErrFile to fd 2...
	I0918 21:26:45.808360 1101836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0918 21:26:45.808423 1101836 out.go:270] X Problems detected in kubelet:
	W0918 21:26:45.808435 1101836 out.go:270]   Sep 18 21:22:47 no-preload-460226 kubelet[657]: W0918 21:22:47.380285     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-460226" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-460226' and this object
	W0918 21:26:45.808448 1101836 out.go:270]   Sep 18 21:22:47 no-preload-460226 kubelet[657]: E0918 21:22:47.380496     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-460226\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-460226' and this object" logger="UnhandledError"
	I0918 21:26:45.808456 1101836 out.go:358] Setting ErrFile to fd 2...
	I0918 21:26:45.808467 1101836 out.go:392] TERM=,COLORTERM=, which probably does not support color
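
Each "Gathering logs for ..." step in the cycle above is two shell invocations run through ssh_runner inside the node: `crictl ps -a --quiet --name=<component>` to resolve container IDs (one ID per line), then `crictl logs --tail 400 <id>` for each ID. A rough standalone sketch of the same pattern via os/exec, assuming crictl is on PATH and sudo is passwordless (a sketch, not ssh_runner itself):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// containerIDs mirrors the `sudo crictl ps -a --quiet --name=<name>` calls in
// the log: --quiet prints only container IDs, --name filters by container name.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		// Matches the `sudo /usr/bin/crictl logs --tail 400 <id>` invocations above.
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			log.Printf("logs %s: %v", id, err)
			continue
		}
		fmt.Printf("--- %s ---\n%s", id, out)
	}
}
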
	I0918 21:26:43.608503 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:45.608546 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:48.108036 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:50.110278 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:55.809764 1101836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:26:55.821706 1101836 api_server.go:72] duration metric: took 4m17.604563005s to wait for apiserver process to appear ...
	I0918 21:26:55.821736 1101836 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:26:55.821773 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:26:55.821838 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:26:55.866021 1101836 cri.go:89] found id: "0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03"
	I0918 21:26:55.866044 1101836 cri.go:89] found id: "4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7"
	I0918 21:26:55.866049 1101836 cri.go:89] found id: ""
	I0918 21:26:55.866057 1101836 logs.go:276] 2 containers: [0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03 4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7]
	I0918 21:26:55.866112 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:55.869869 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:55.873421 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0918 21:26:55.873498 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:26:52.608465 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:55.110186 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:55.919255 1101836 cri.go:89] found id: "68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825"
	I0918 21:26:55.919275 1101836 cri.go:89] found id: "06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1"
	I0918 21:26:55.919280 1101836 cri.go:89] found id: ""
	I0918 21:26:55.919288 1101836 logs.go:276] 2 containers: [68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825 06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1]
	I0918 21:26:55.919350 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:55.925127 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:55.929018 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0918 21:26:55.929089 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:26:55.966456 1101836 cri.go:89] found id: "629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed"
	I0918 21:26:55.966481 1101836 cri.go:89] found id: "4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3"
	I0918 21:26:55.966486 1101836 cri.go:89] found id: ""
	I0918 21:26:55.966494 1101836 logs.go:276] 2 containers: [629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed 4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3]
	I0918 21:26:55.966585 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:55.970047 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:55.975173 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:26:55.975263 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:26:56.027817 1101836 cri.go:89] found id: "57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580"
	I0918 21:26:56.027843 1101836 cri.go:89] found id: "5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a"
	I0918 21:26:56.027849 1101836 cri.go:89] found id: ""
	I0918 21:26:56.027856 1101836 logs.go:276] 2 containers: [57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580 5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a]
	I0918 21:26:56.027972 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:56.032728 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:56.036524 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:26:56.036630 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:26:56.091749 1101836 cri.go:89] found id: "0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2"
	I0918 21:26:56.091774 1101836 cri.go:89] found id: "17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709"
	I0918 21:26:56.091779 1101836 cri.go:89] found id: ""
	I0918 21:26:56.091786 1101836 logs.go:276] 2 containers: [0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2 17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709]
	I0918 21:26:56.091874 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:56.095957 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:56.100351 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:26:56.100472 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:26:56.151445 1101836 cri.go:89] found id: "a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d"
	I0918 21:26:56.151470 1101836 cri.go:89] found id: "1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a"
	I0918 21:26:56.151475 1101836 cri.go:89] found id: ""
	I0918 21:26:56.151482 1101836 logs.go:276] 2 containers: [a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d 1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a]
	I0918 21:26:56.151573 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:56.155114 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:56.158506 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0918 21:26:56.158592 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:26:56.206388 1101836 cri.go:89] found id: "5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b"
	I0918 21:26:56.206412 1101836 cri.go:89] found id: "567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a"
	I0918 21:26:56.206417 1101836 cri.go:89] found id: ""
	I0918 21:26:56.206424 1101836 logs.go:276] 2 containers: [5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b 567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a]
	I0918 21:26:56.206508 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:56.210217 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:56.213488 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:26:56.213553 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:26:56.250893 1101836 cri.go:89] found id: "594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743"
	I0918 21:26:56.250917 1101836 cri.go:89] found id: "d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f"
	I0918 21:26:56.250923 1101836 cri.go:89] found id: ""
	I0918 21:26:56.250930 1101836 logs.go:276] 2 containers: [594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743 d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f]
	I0918 21:26:56.251011 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:56.254857 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:56.258301 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:26:56.258402 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:26:56.297420 1101836 cri.go:89] found id: "2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232"
	I0918 21:26:56.297443 1101836 cri.go:89] found id: ""
	I0918 21:26:56.297451 1101836 logs.go:276] 1 containers: [2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232]
	I0918 21:26:56.297530 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:26:56.301114 1101836 logs.go:123] Gathering logs for kubelet ...
	I0918 21:26:56.301141 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:26:56.343257 1101836 logs.go:138] Found kubelet problem: Sep 18 21:22:47 no-preload-460226 kubelet[657]: W0918 21:22:47.380285     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-460226" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-460226' and this object
	W0918 21:26:56.343555 1101836 logs.go:138] Found kubelet problem: Sep 18 21:22:47 no-preload-460226 kubelet[657]: E0918 21:22:47.380496     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-460226\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-460226' and this object" logger="UnhandledError"
	I0918 21:26:56.374155 1101836 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:26:56.374193 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:26:56.517268 1101836 logs.go:123] Gathering logs for containerd ...
	I0918 21:26:56.517298 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0918 21:26:56.583876 1101836 logs.go:123] Gathering logs for storage-provisioner [594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743] ...
	I0918 21:26:56.583913 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743"
	I0918 21:26:56.633486 1101836 logs.go:123] Gathering logs for storage-provisioner [d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f] ...
	I0918 21:26:56.633516 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f"
	I0918 21:26:56.688920 1101836 logs.go:123] Gathering logs for container status ...
	I0918 21:26:56.688949 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:26:56.747958 1101836 logs.go:123] Gathering logs for etcd [68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825] ...
	I0918 21:26:56.747986 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825"
	I0918 21:26:56.792850 1101836 logs.go:123] Gathering logs for coredns [629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed] ...
	I0918 21:26:56.792883 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed"
	I0918 21:26:56.846842 1101836 logs.go:123] Gathering logs for kube-scheduler [57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580] ...
	I0918 21:26:56.846922 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580"
	I0918 21:26:56.886051 1101836 logs.go:123] Gathering logs for kube-scheduler [5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a] ...
	I0918 21:26:56.886082 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a"
	I0918 21:26:56.939862 1101836 logs.go:123] Gathering logs for kube-controller-manager [a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d] ...
	I0918 21:26:56.939896 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d"
	I0918 21:26:57.013111 1101836 logs.go:123] Gathering logs for kube-proxy [0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2] ...
	I0918 21:26:57.013148 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2"
	I0918 21:26:57.055284 1101836 logs.go:123] Gathering logs for kube-proxy [17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709] ...
	I0918 21:26:57.055316 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709"
	I0918 21:26:57.100788 1101836 logs.go:123] Gathering logs for kubernetes-dashboard [2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232] ...
	I0918 21:26:57.100824 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232"
	I0918 21:26:57.148156 1101836 logs.go:123] Gathering logs for kube-controller-manager [1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a] ...
	I0918 21:26:57.148187 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a"
	I0918 21:26:57.216584 1101836 logs.go:123] Gathering logs for kindnet [5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b] ...
	I0918 21:26:57.216620 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b"
	I0918 21:26:57.265466 1101836 logs.go:123] Gathering logs for kindnet [567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a] ...
	I0918 21:26:57.265502 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a"
	I0918 21:26:57.303033 1101836 logs.go:123] Gathering logs for dmesg ...
	I0918 21:26:57.303062 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:26:57.319407 1101836 logs.go:123] Gathering logs for kube-apiserver [0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03] ...
	I0918 21:26:57.319437 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03"
	I0918 21:26:57.370879 1101836 logs.go:123] Gathering logs for kube-apiserver [4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7] ...
	I0918 21:26:57.370911 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7"
	I0918 21:26:57.422430 1101836 logs.go:123] Gathering logs for etcd [06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1] ...
	I0918 21:26:57.422465 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1"
	I0918 21:26:57.485199 1101836 logs.go:123] Gathering logs for coredns [4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3] ...
	I0918 21:26:57.485236 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3"
	I0918 21:26:57.523037 1101836 out.go:358] Setting ErrFile to fd 2...
	I0918 21:26:57.523061 1101836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0918 21:26:57.523113 1101836 out.go:270] X Problems detected in kubelet:
	W0918 21:26:57.523130 1101836 out.go:270]   Sep 18 21:22:47 no-preload-460226 kubelet[657]: W0918 21:22:47.380285     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-460226" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-460226' and this object
	W0918 21:26:57.523139 1101836 out.go:270]   Sep 18 21:22:47 no-preload-460226 kubelet[657]: E0918 21:22:47.380496     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-460226\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-460226' and this object" logger="UnhandledError"
	I0918 21:26:57.523152 1101836 out.go:358] Setting ErrFile to fd 2...
	I0918 21:26:57.523159 1101836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:26:57.607401 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:59.608270 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:01.608361 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:04.108401 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:06.158568 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:07.525089 1101836 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0918 21:27:07.532748 1101836 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0918 21:27:07.533775 1101836 api_server.go:141] control plane version: v1.31.1
	I0918 21:27:07.533802 1101836 api_server.go:131] duration metric: took 11.712058797s to wait for apiserver health ...
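
The healthz wait that concludes here is a plain HTTPS GET against the apiserver, treating status 200 with body `ok` as healthy. A minimal sketch of such a probe (the endpoint is copied from the log; skipping certificate verification is an assumption made for brevity, where the real client presents the cluster's credentials):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.76.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch only: skip cert verification instead of
		// loading the cluster CA and client certificates as a real client would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log above.
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
}
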
	I0918 21:27:07.533810 1101836 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:27:07.533835 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:27:07.533898 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:27:07.573616 1101836 cri.go:89] found id: "0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03"
	I0918 21:27:07.573641 1101836 cri.go:89] found id: "4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7"
	I0918 21:27:07.573646 1101836 cri.go:89] found id: ""
	I0918 21:27:07.573654 1101836 logs.go:276] 2 containers: [0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03 4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7]
	I0918 21:27:07.573717 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.577392 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.581055 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0918 21:27:07.581133 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:27:07.626646 1101836 cri.go:89] found id: "68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825"
	I0918 21:27:07.626667 1101836 cri.go:89] found id: "06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1"
	I0918 21:27:07.626672 1101836 cri.go:89] found id: ""
	I0918 21:27:07.626679 1101836 logs.go:276] 2 containers: [68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825 06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1]
	I0918 21:27:07.626737 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.630453 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.633884 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0918 21:27:07.633957 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:27:07.674908 1101836 cri.go:89] found id: "629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed"
	I0918 21:27:07.674929 1101836 cri.go:89] found id: "4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3"
	I0918 21:27:07.674934 1101836 cri.go:89] found id: ""
	I0918 21:27:07.674941 1101836 logs.go:276] 2 containers: [629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed 4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3]
	I0918 21:27:07.674999 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.678431 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.682086 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:27:07.682176 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:27:07.721173 1101836 cri.go:89] found id: "57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580"
	I0918 21:27:07.721246 1101836 cri.go:89] found id: "5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a"
	I0918 21:27:07.721258 1101836 cri.go:89] found id: ""
	I0918 21:27:07.721267 1101836 logs.go:276] 2 containers: [57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580 5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a]
	I0918 21:27:07.721338 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.725231 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.729041 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:27:07.729133 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:27:07.767823 1101836 cri.go:89] found id: "0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2"
	I0918 21:27:07.767858 1101836 cri.go:89] found id: "17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709"
	I0918 21:27:07.767863 1101836 cri.go:89] found id: ""
	I0918 21:27:07.767870 1101836 logs.go:276] 2 containers: [0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2 17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709]
	I0918 21:27:07.767982 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.772011 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.775242 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:27:07.775345 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:27:07.813740 1101836 cri.go:89] found id: "a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d"
	I0918 21:27:07.813763 1101836 cri.go:89] found id: "1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a"
	I0918 21:27:07.813769 1101836 cri.go:89] found id: ""
	I0918 21:27:07.813825 1101836 logs.go:276] 2 containers: [a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d 1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a]
	I0918 21:27:07.813902 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.817554 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.821040 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0918 21:27:07.821119 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:27:07.864854 1101836 cri.go:89] found id: "5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b"
	I0918 21:27:07.864934 1101836 cri.go:89] found id: "567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a"
	I0918 21:27:07.864954 1101836 cri.go:89] found id: ""
	I0918 21:27:07.864964 1101836 logs.go:276] 2 containers: [5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b 567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a]
	I0918 21:27:07.865037 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.869043 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.872994 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:27:07.873123 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:27:07.920946 1101836 cri.go:89] found id: "2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232"
	I0918 21:27:07.920972 1101836 cri.go:89] found id: ""
	I0918 21:27:07.920992 1101836 logs.go:276] 1 containers: [2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232]
	I0918 21:27:07.921056 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.925010 1101836 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:27:07.925099 1101836 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:27:07.966397 1101836 cri.go:89] found id: "594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743"
	I0918 21:27:07.966418 1101836 cri.go:89] found id: "d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f"
	I0918 21:27:07.966424 1101836 cri.go:89] found id: ""
	I0918 21:27:07.966432 1101836 logs.go:276] 2 containers: [594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743 d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f]
	I0918 21:27:07.966512 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.970318 1101836 ssh_runner.go:195] Run: which crictl
	I0918 21:27:07.974133 1101836 logs.go:123] Gathering logs for kube-scheduler [5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a] ...
	I0918 21:27:07.974165 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c017066f72fdd0f23d9e81704f350987231c816bed0a2e51266b9f65788669a"
	I0918 21:27:08.032780 1101836 logs.go:123] Gathering logs for kindnet [567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a] ...
	I0918 21:27:08.032910 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 567c82a454c85abaccbb4039a7dd4f488e2f32d5aba7d738e49072a012cfcc7a"
	I0918 21:27:08.098743 1101836 logs.go:123] Gathering logs for etcd [68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825] ...
	I0918 21:27:08.098771 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68d0e11a6fff94b71fc1cfa583bbd2415c6c97b2b8747fe26aadac7bed656825"
	I0918 21:27:08.157387 1101836 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:27:08.157418 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:27:08.282708 1101836 logs.go:123] Gathering logs for kube-apiserver [0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03] ...
	I0918 21:27:08.282738 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ee3f828881b286763629a85437d3f070c54aba4e362f49d36410cdf4ea82b03"
	I0918 21:27:08.339344 1101836 logs.go:123] Gathering logs for kube-apiserver [4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7] ...
	I0918 21:27:08.339377 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b0d537c48a64429f287bc03f415247fffaa98ad2781d95dacac1051ae5a49a7"
	I0918 21:27:08.399204 1101836 logs.go:123] Gathering logs for coredns [4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3] ...
	I0918 21:27:08.403458 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4620fc70fd7781327181375f6cdf871a49cd1a5f446b3c651eafca6f63bebbe3"
	I0918 21:27:08.458154 1101836 logs.go:123] Gathering logs for kube-scheduler [57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580] ...
	I0918 21:27:08.458183 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57ba66d42bea813939afaed1921a66dfd71ef1941ced5bfb3df4fdef17ad6580"
	I0918 21:27:08.509445 1101836 logs.go:123] Gathering logs for kube-proxy [17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709] ...
	I0918 21:27:08.509474 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17b6d2a735b0ecc36f3f81af97bab8543dec6ffd3920f722eaa55d66a29cc709"
	I0918 21:27:08.548625 1101836 logs.go:123] Gathering logs for kindnet [5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b] ...
	I0918 21:27:08.548653 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5479eec1e58ebb906c4510430afe3e0119d24ec7ad2bae7a0605a470cac56e0b"
	I0918 21:27:08.590693 1101836 logs.go:123] Gathering logs for kubelet ...
	I0918 21:27:08.590722 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:27:08.639826 1101836 logs.go:138] Found kubelet problem: Sep 18 21:22:47 no-preload-460226 kubelet[657]: W0918 21:22:47.380285     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-460226" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-460226' and this object
	W0918 21:27:08.640093 1101836 logs.go:138] Found kubelet problem: Sep 18 21:22:47 no-preload-460226 kubelet[657]: E0918 21:22:47.380496     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-460226\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-460226' and this object" logger="UnhandledError"
	I0918 21:27:08.685521 1101836 logs.go:123] Gathering logs for storage-provisioner [594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743] ...
	I0918 21:27:08.685563 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 594e0a59074ca666a676434778a7732a87bee258e8b1c3239444d778923bb743"
	I0918 21:27:08.725714 1101836 logs.go:123] Gathering logs for kubernetes-dashboard [2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232] ...
	I0918 21:27:08.725743 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c3426895ca26d414db8ec718f35eab0387be17bce9442ee6a2fe3e78708b232"
	I0918 21:27:08.767564 1101836 logs.go:123] Gathering logs for coredns [629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed] ...
	I0918 21:27:08.767595 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 629154c9b386705a7f30bd6345a19587e4d84ade5d76660365d638201541feed"
	I0918 21:27:08.809666 1101836 logs.go:123] Gathering logs for kube-proxy [0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2] ...
	I0918 21:27:08.809696 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e4acd0b809fd66b15576a0d4c44f1ab604a7d11a395d9e50c3389f2468570e2"
	I0918 21:27:08.855724 1101836 logs.go:123] Gathering logs for kube-controller-manager [a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d] ...
	I0918 21:27:08.855763 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7ffdb7c6d60dcd38035d7dc56a9feb4e639dcb60b35ceb3a4d7de1c5f49b00d"
	I0918 21:27:08.935099 1101836 logs.go:123] Gathering logs for kube-controller-manager [1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a] ...
	I0918 21:27:08.935133 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b0085c9a80f3dc429f4f146f8bb6b710a53d7c9586beae3417bb6944622d58a"
	I0918 21:27:08.993468 1101836 logs.go:123] Gathering logs for container status ...
	I0918 21:27:08.993501 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:27:09.044296 1101836 logs.go:123] Gathering logs for dmesg ...
	I0918 21:27:09.044324 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:27:09.064103 1101836 logs.go:123] Gathering logs for storage-provisioner [d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f] ...
	I0918 21:27:09.064141 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9cd1c9c475933c679e698c0776b08baa644962fcf49ef0493fbc7e2c51ecd9f"
	I0918 21:27:09.113855 1101836 logs.go:123] Gathering logs for containerd ...
	I0918 21:27:09.113885 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0918 21:27:09.175198 1101836 logs.go:123] Gathering logs for etcd [06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1] ...
	I0918 21:27:09.175237 1101836 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06c4bbdf2a9fac8215f1a95053b43f1cc7a3ef524304f6dbd95b83c0b170aaf1"
	I0918 21:27:09.223135 1101836 out.go:358] Setting ErrFile to fd 2...
	I0918 21:27:09.223164 1101836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0918 21:27:09.223241 1101836 out.go:270] X Problems detected in kubelet:
	W0918 21:27:09.223256 1101836 out.go:270]   Sep 18 21:22:47 no-preload-460226 kubelet[657]: W0918 21:22:47.380285     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-460226" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-460226' and this object
	W0918 21:27:09.223284 1101836 out.go:270]   Sep 18 21:22:47 no-preload-460226 kubelet[657]: E0918 21:22:47.380496     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-460226\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-460226' and this object" logger="UnhandledError"
	I0918 21:27:09.223298 1101836 out.go:358] Setting ErrFile to fd 2...
	I0918 21:27:09.223304 1101836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:27:08.610247 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:11.108978 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:13.110231 1096772 pod_ready.go:103] pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace has status "Ready":"False"
	I0918 21:27:13.110264 1096772 pod_ready.go:82] duration metric: took 4m0.008430385s for pod "metrics-server-9975d5f86-vgp87" in "kube-system" namespace to be "Ready" ...
	E0918 21:27:13.110276 1096772 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0918 21:27:13.110284 1096772 pod_ready.go:39] duration metric: took 5m19.184329644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:27:13.110298 1096772 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:27:13.110331 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:27:13.110395 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:27:13.148685 1096772 cri.go:89] found id: "e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972"
	I0918 21:27:13.148711 1096772 cri.go:89] found id: "e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f"
	I0918 21:27:13.148728 1096772 cri.go:89] found id: ""
	I0918 21:27:13.148735 1096772 logs.go:276] 2 containers: [e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972 e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f]
	I0918 21:27:13.148793 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.152558 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.156017 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0918 21:27:13.156142 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:27:13.194545 1096772 cri.go:89] found id: "bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31"
	I0918 21:27:13.194568 1096772 cri.go:89] found id: "85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155"
	I0918 21:27:13.194573 1096772 cri.go:89] found id: ""
	I0918 21:27:13.194581 1096772 logs.go:276] 2 containers: [bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31 85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155]
	I0918 21:27:13.194640 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.198199 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.202224 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0918 21:27:13.202351 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:27:13.251899 1096772 cri.go:89] found id: "3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e"
	I0918 21:27:13.251920 1096772 cri.go:89] found id: "76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77"
	I0918 21:27:13.251925 1096772 cri.go:89] found id: ""
	I0918 21:27:13.251932 1096772 logs.go:276] 2 containers: [3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e 76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77]
	I0918 21:27:13.251994 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.255890 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.259521 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:27:13.259597 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:27:13.308887 1096772 cri.go:89] found id: "5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65"
	I0918 21:27:13.308908 1096772 cri.go:89] found id: "654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3"
	I0918 21:27:13.308913 1096772 cri.go:89] found id: ""
	I0918 21:27:13.308921 1096772 logs.go:276] 2 containers: [5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65 654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3]
	I0918 21:27:13.308984 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.312499 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.315800 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:27:13.315874 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:27:13.362479 1096772 cri.go:89] found id: "97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57"
	I0918 21:27:13.362503 1096772 cri.go:89] found id: "724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890"
	I0918 21:27:13.362509 1096772 cri.go:89] found id: ""
	I0918 21:27:13.362525 1096772 logs.go:276] 2 containers: [97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57 724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890]
	I0918 21:27:13.362625 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.366451 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.370153 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:27:13.370241 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:27:13.411494 1096772 cri.go:89] found id: "6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e"
	I0918 21:27:13.411520 1096772 cri.go:89] found id: "0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca"
	I0918 21:27:13.411526 1096772 cri.go:89] found id: ""
	I0918 21:27:13.411533 1096772 logs.go:276] 2 containers: [6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e 0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca]
	I0918 21:27:13.411591 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.415339 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.418757 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0918 21:27:13.418839 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:27:13.467492 1096772 cri.go:89] found id: "d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db"
	I0918 21:27:13.467516 1096772 cri.go:89] found id: "db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c"
	I0918 21:27:13.467521 1096772 cri.go:89] found id: ""
	I0918 21:27:13.467529 1096772 logs.go:276] 2 containers: [d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c]
	I0918 21:27:13.467589 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.471594 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.475548 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:27:13.475624 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:27:13.516183 1096772 cri.go:89] found id: "ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e"
	I0918 21:27:13.516211 1096772 cri.go:89] found id: "cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f"
	I0918 21:27:13.516217 1096772 cri.go:89] found id: ""
	I0918 21:27:13.516226 1096772 logs.go:276] 2 containers: [ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f]
	I0918 21:27:13.516288 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.520044 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:13.523799 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:27:13.523885 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:27:13.562638 1096772 cri.go:89] found id: "d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e"
	I0918 21:27:13.562700 1096772 cri.go:89] found id: ""
	I0918 21:27:13.562732 1096772 logs.go:276] 1 containers: [d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e]
	I0918 21:27:13.562819 1096772 ssh_runner.go:195] Run: which crictl
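Each cri.go/ssh_runner.go pair above is the same two-step discovery the harness repeats per component: filter CRI containers by name to get bare IDs, then tail each ID's log. By hand it reduces to the following sketch (assuming crictl talks to the default containerd CRI socket; <container-id> is a placeholder for one of the IDs printed by the first command):

  # IDs only (-a includes exited containers), filtered by container name
  sudo crictl ps -a --quiet --name=kube-apiserver
  # last 400 lines of one matched container
  sudo /usr/bin/crictl logs --tail 400 <container-id>

Two IDs per component are expected in this run, consistent with the second start keeping each exited first-start container alongside its running replacement.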
	I0918 21:27:13.566702 1096772 logs.go:123] Gathering logs for dmesg ...
	I0918 21:27:13.566777 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:27:13.584524 1096772 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:27:13.584556 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:27:13.731424 1096772 logs.go:123] Gathering logs for kube-apiserver [e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f] ...
	I0918 21:27:13.731457 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f"
	I0918 21:27:13.806786 1096772 logs.go:123] Gathering logs for etcd [bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31] ...
	I0918 21:27:13.806823 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31"
	I0918 21:27:13.861512 1096772 logs.go:123] Gathering logs for kube-controller-manager [6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e] ...
	I0918 21:27:13.861545 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e"
	I0918 21:27:13.920411 1096772 logs.go:123] Gathering logs for container status ...
	I0918 21:27:13.920451 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:27:13.965699 1096772 logs.go:123] Gathering logs for etcd [85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155] ...
	I0918 21:27:13.965731 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155"
	I0918 21:27:14.021328 1096772 logs.go:123] Gathering logs for kube-scheduler [5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65] ...
	I0918 21:27:14.021364 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65"
	I0918 21:27:14.068191 1096772 logs.go:123] Gathering logs for kindnet [d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db] ...
	I0918 21:27:14.068220 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db"
	I0918 21:27:14.132069 1096772 logs.go:123] Gathering logs for kubernetes-dashboard [d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e] ...
	I0918 21:27:14.132139 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e"
	I0918 21:27:14.174972 1096772 logs.go:123] Gathering logs for kube-apiserver [e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972] ...
	I0918 21:27:14.175006 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972"
	I0918 21:27:14.240583 1096772 logs.go:123] Gathering logs for coredns [76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77] ...
	I0918 21:27:14.240622 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77"
	I0918 21:27:14.286652 1096772 logs.go:123] Gathering logs for kube-scheduler [654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3] ...
	I0918 21:27:14.286685 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3"
	I0918 21:27:14.337176 1096772 logs.go:123] Gathering logs for kube-proxy [97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57] ...
	I0918 21:27:14.337214 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57"
	I0918 21:27:14.376811 1096772 logs.go:123] Gathering logs for kube-proxy [724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890] ...
	I0918 21:27:14.376901 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890"
	I0918 21:27:14.416199 1096772 logs.go:123] Gathering logs for kube-controller-manager [0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca] ...
	I0918 21:27:14.416229 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca"
	I0918 21:27:14.484345 1096772 logs.go:123] Gathering logs for storage-provisioner [cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f] ...
	I0918 21:27:14.484386 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f"
	I0918 21:27:14.523378 1096772 logs.go:123] Gathering logs for kubelet ...
	I0918 21:27:14.523407 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:27:14.585137 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:53 old-k8s-version-025914 kubelet[665]: E0918 21:21:53.794983     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-rqmbg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-rqmbg" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.585412 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:53 old-k8s-version-025914 kubelet[665]: E0918 21:21:53.796426     665 reflector.go:138] object-"kube-system"/"kindnet-token-xbssb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xbssb" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.589404 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022067     665 reflector.go:138] object-"kube-system"/"metrics-server-token-9b79x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9b79x" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.589616 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022444     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.589820 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022526     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.590060 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022568     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-n2hmt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-n2hmt" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.590275 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022715     665 reflector.go:138] object-"default"/"default-token-65brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-65brt" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.590491 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022772     665 reflector.go:138] object-"kube-system"/"coredns-token-jl4pr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jl4pr" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:14.598022 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:55 old-k8s-version-025914 kubelet[665]: E0918 21:21:55.759054     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:14.598215 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:55 old-k8s-version-025914 kubelet[665]: E0918 21:21:55.869815     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.603092 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:06 old-k8s-version-025914 kubelet[665]: E0918 21:22:06.685684     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:14.604813 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:19 old-k8s-version-025914 kubelet[665]: E0918 21:22:19.677495     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.605748 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:24 old-k8s-version-025914 kubelet[665]: E0918 21:22:24.020367     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.606081 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:25 old-k8s-version-025914 kubelet[665]: E0918 21:22:25.025956     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.606529 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:28 old-k8s-version-025914 kubelet[665]: E0918 21:22:28.035972     665 pod_workers.go:191] Error syncing pod a55c40ca-6e3f-4daa-907a-f52eb8fa9d41 ("storage-provisioner_kube-system(a55c40ca-6e3f-4daa-907a-f52eb8fa9d41)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a55c40ca-6e3f-4daa-907a-f52eb8fa9d41)"
	W0918 21:27:14.606859 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:30 old-k8s-version-025914 kubelet[665]: E0918 21:22:30.495050     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.609730 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:34 old-k8s-version-025914 kubelet[665]: E0918 21:22:34.683315     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:14.610505 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:45 old-k8s-version-025914 kubelet[665]: E0918 21:22:45.144685     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.610692 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:45 old-k8s-version-025914 kubelet[665]: E0918 21:22:45.670921     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.611025 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:50 old-k8s-version-025914 kubelet[665]: E0918 21:22:50.494917     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.611213 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:57 old-k8s-version-025914 kubelet[665]: E0918 21:22:57.675883     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.611805 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:07 old-k8s-version-025914 kubelet[665]: E0918 21:23:07.202210     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.611991 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:08 old-k8s-version-025914 kubelet[665]: E0918 21:23:08.671464     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.612332 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:10 old-k8s-version-025914 kubelet[665]: E0918 21:23:10.495097     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.614898 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:19 old-k8s-version-025914 kubelet[665]: E0918 21:23:19.683726     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:14.615233 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:20 old-k8s-version-025914 kubelet[665]: E0918 21:23:20.670558     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.615436 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:30 old-k8s-version-025914 kubelet[665]: E0918 21:23:30.671353     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.615776 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:35 old-k8s-version-025914 kubelet[665]: E0918 21:23:35.670703     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.615964 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:45 old-k8s-version-025914 kubelet[665]: E0918 21:23:45.670861     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.616600 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:47 old-k8s-version-025914 kubelet[665]: E0918 21:23:47.312536     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.616941 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:50 old-k8s-version-025914 kubelet[665]: E0918 21:23:50.495433     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.617131 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:58 old-k8s-version-025914 kubelet[665]: E0918 21:23:58.670952     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.617560 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:02 old-k8s-version-025914 kubelet[665]: E0918 21:24:02.670893     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.617752 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:11 old-k8s-version-025914 kubelet[665]: E0918 21:24:11.670937     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.618091 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:15 old-k8s-version-025914 kubelet[665]: E0918 21:24:15.670558     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.618284 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:22 old-k8s-version-025914 kubelet[665]: E0918 21:24:22.672129     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.618616 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:31 old-k8s-version-025914 kubelet[665]: E0918 21:24:31.670774     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.618804 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:35 old-k8s-version-025914 kubelet[665]: E0918 21:24:35.671357     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.619136 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:45 old-k8s-version-025914 kubelet[665]: E0918 21:24:45.670584     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.621620 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:46 old-k8s-version-025914 kubelet[665]: E0918 21:24:46.681946     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:14.621812 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:59 old-k8s-version-025914 kubelet[665]: E0918 21:24:59.670935     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.622143 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:00 old-k8s-version-025914 kubelet[665]: E0918 21:25:00.670943     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.622333 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:10 old-k8s-version-025914 kubelet[665]: E0918 21:25:10.677379     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.622945 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:16 old-k8s-version-025914 kubelet[665]: E0918 21:25:16.556333     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.623276 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:20 old-k8s-version-025914 kubelet[665]: E0918 21:25:20.495478     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.623464 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:24 old-k8s-version-025914 kubelet[665]: E0918 21:25:24.671063     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.623793 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:32 old-k8s-version-025914 kubelet[665]: E0918 21:25:32.671331     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.623982 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:35 old-k8s-version-025914 kubelet[665]: E0918 21:25:35.670802     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.624317 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:46 old-k8s-version-025914 kubelet[665]: E0918 21:25:46.670903     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.624511 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:46 old-k8s-version-025914 kubelet[665]: E0918 21:25:46.674707     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.624699 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:58 old-k8s-version-025914 kubelet[665]: E0918 21:25:58.670888     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.625030 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:01 old-k8s-version-025914 kubelet[665]: E0918 21:26:01.670449     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.625359 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:12 old-k8s-version-025914 kubelet[665]: E0918 21:26:12.672485     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.625547 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:12 old-k8s-version-025914 kubelet[665]: E0918 21:26:12.672863     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.625887 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:24 old-k8s-version-025914 kubelet[665]: E0918 21:26:24.671138     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.626074 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:25 old-k8s-version-025914 kubelet[665]: E0918 21:26:25.670938     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.626404 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:35 old-k8s-version-025914 kubelet[665]: E0918 21:26:35.670522     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.626591 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:37 old-k8s-version-025914 kubelet[665]: E0918 21:26:37.670947     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.626922 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:48 old-k8s-version-025914 kubelet[665]: E0918 21:26:48.671182     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.627108 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:49 old-k8s-version-025914 kubelet[665]: E0918 21:26:49.670964     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.627437 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:00 old-k8s-version-025914 kubelet[665]: E0918 21:27:00.671211     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.627623 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:03 old-k8s-version-025914 kubelet[665]: E0918 21:27:03.670945     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
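The warnings above fall into two recurring signatures. The initial burst (21:21:53-54) is kubelet reflectors being denied secrets and configmaps with "no relationship found between node ... and this object": the node authorizer rejects those watches until it has re-learned which pods sit on the freshly restarted node, after which they stop. Everything afterwards alternates between metrics-server in ErrImagePull/ImagePullBackOff (its image points at fake.domain, which never resolves, so the pull can never succeed) and dashboard-metrics-scraper in a steadily lengthening CrashLoopBackOff. A sketch to confirm the unresolvable image reference by hand (assuming the addon's usual deployment name, metrics-server):

  kubectl --context old-k8s-version-025914 -n kube-system \
    get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'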
	I0918 21:27:14.627633 1096772 logs.go:123] Gathering logs for coredns [3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e] ...
	I0918 21:27:14.627680 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e"
	I0918 21:27:14.667217 1096772 logs.go:123] Gathering logs for kindnet [db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c] ...
	I0918 21:27:14.667296 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c"
	I0918 21:27:14.721198 1096772 logs.go:123] Gathering logs for storage-provisioner [ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e] ...
	I0918 21:27:14.721230 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e"
	I0918 21:27:14.767188 1096772 logs.go:123] Gathering logs for containerd ...
	I0918 21:27:14.767217 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0918 21:27:14.830243 1096772 out.go:358] Setting ErrFile to fd 2...
	I0918 21:27:14.830276 1096772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0918 21:27:14.830354 1096772 out.go:270] X Problems detected in kubelet:
	W0918 21:27:14.830369 1096772 out.go:270]   Sep 18 21:26:37 old-k8s-version-025914 kubelet[665]: E0918 21:26:37.670947     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.830424 1096772 out.go:270]   Sep 18 21:26:48 old-k8s-version-025914 kubelet[665]: E0918 21:26:48.671182     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.830433 1096772 out.go:270]   Sep 18 21:26:49 old-k8s-version-025914 kubelet[665]: E0918 21:26:49.670964     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:14.830447 1096772 out.go:270]   Sep 18 21:27:00 old-k8s-version-025914 kubelet[665]: E0918 21:27:00.671211     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:14.830453 1096772 out.go:270]   Sep 18 21:27:03 old-k8s-version-025914 kubelet[665]: E0918 21:27:03.670945     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0918 21:27:14.830464 1096772 out.go:358] Setting ErrFile to fd 2...
	I0918 21:27:14.830473 1096772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:27:19.233721 1101836 system_pods.go:59] 9 kube-system pods found
	I0918 21:27:19.233763 1101836 system_pods.go:61] "coredns-7c65d6cfc9-sd2t5" [617fba9e-176a-45a3-b7e8-d4dad96b49a9] Running
	I0918 21:27:19.233772 1101836 system_pods.go:61] "etcd-no-preload-460226" [826ffbf4-a22d-4f6a-9b37-2f7ca6aa84a8] Running
	I0918 21:27:19.233777 1101836 system_pods.go:61] "kindnet-nrr8r" [509086ac-e538-4838-a759-ecb0df61a9e5] Running
	I0918 21:27:19.233804 1101836 system_pods.go:61] "kube-apiserver-no-preload-460226" [fcfae4fb-ee11-4f27-94fc-f8ca8b88db3a] Running
	I0918 21:27:19.233814 1101836 system_pods.go:61] "kube-controller-manager-no-preload-460226" [b905a6e8-386c-4a2a-8dc9-a6e1acbfc58c] Running
	I0918 21:27:19.233818 1101836 system_pods.go:61] "kube-proxy-84bl9" [3f44dde4-7692-4eea-b3c4-e53b88ae8de7] Running
	I0918 21:27:19.233822 1101836 system_pods.go:61] "kube-scheduler-no-preload-460226" [79f06f81-56b5-4baf-bef3-dcc5ee387bf0] Running
	I0918 21:27:19.233829 1101836 system_pods.go:61] "metrics-server-6867b74b74-w984l" [7e63f7df-9919-4ea6-af1b-d5619c635218] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:27:19.233839 1101836 system_pods.go:61] "storage-provisioner" [e4fd2599-cf0e-4c9f-9378-bd6b59c7a55e] Running
	I0918 21:27:19.233847 1101836 system_pods.go:74] duration metric: took 11.700031119s to wait for pod list to return data ...
	I0918 21:27:19.233857 1101836 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:27:19.236757 1101836 default_sa.go:45] found service account: "default"
	I0918 21:27:19.236790 1101836 default_sa.go:55] duration metric: took 2.926789ms for default service account to be created ...
	I0918 21:27:19.236800 1101836 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:27:19.242608 1101836 system_pods.go:86] 9 kube-system pods found
	I0918 21:27:19.242644 1101836 system_pods.go:89] "coredns-7c65d6cfc9-sd2t5" [617fba9e-176a-45a3-b7e8-d4dad96b49a9] Running
	I0918 21:27:19.242652 1101836 system_pods.go:89] "etcd-no-preload-460226" [826ffbf4-a22d-4f6a-9b37-2f7ca6aa84a8] Running
	I0918 21:27:19.242657 1101836 system_pods.go:89] "kindnet-nrr8r" [509086ac-e538-4838-a759-ecb0df61a9e5] Running
	I0918 21:27:19.242662 1101836 system_pods.go:89] "kube-apiserver-no-preload-460226" [fcfae4fb-ee11-4f27-94fc-f8ca8b88db3a] Running
	I0918 21:27:19.242668 1101836 system_pods.go:89] "kube-controller-manager-no-preload-460226" [b905a6e8-386c-4a2a-8dc9-a6e1acbfc58c] Running
	I0918 21:27:19.242672 1101836 system_pods.go:89] "kube-proxy-84bl9" [3f44dde4-7692-4eea-b3c4-e53b88ae8de7] Running
	I0918 21:27:19.242677 1101836 system_pods.go:89] "kube-scheduler-no-preload-460226" [79f06f81-56b5-4baf-bef3-dcc5ee387bf0] Running
	I0918 21:27:19.242684 1101836 system_pods.go:89] "metrics-server-6867b74b74-w984l" [7e63f7df-9919-4ea6-af1b-d5619c635218] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:27:19.242689 1101836 system_pods.go:89] "storage-provisioner" [e4fd2599-cf0e-4c9f-9378-bd6b59c7a55e] Running
	I0918 21:27:19.242697 1101836 system_pods.go:126] duration metric: took 5.891091ms to wait for k8s-apps to be running ...
	I0918 21:27:19.242709 1101836 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:27:19.242767 1101836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:27:19.254647 1101836 system_svc.go:56] duration metric: took 11.927059ms WaitForService to wait for kubelet
	I0918 21:27:19.254677 1101836 kubeadm.go:582] duration metric: took 4m41.037539504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:27:19.254706 1101836 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:27:19.258090 1101836 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0918 21:27:19.258124 1101836 node_conditions.go:123] node cpu capacity is 2
	I0918 21:27:19.258135 1101836 node_conditions.go:105] duration metric: took 3.423401ms to run NodePressure ...
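node_conditions reads these figures (203034800Ki ephemeral storage, 2 CPUs) straight from the node object's status. The same numbers can be pulled manually, a sketch assuming the node is named after the profile:

  kubectl --context no-preload-460226 get node no-preload-460226 \
    -o jsonpath='{.status.capacity}'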
	I0918 21:27:19.258147 1101836 start.go:241] waiting for startup goroutines ...
	I0918 21:27:19.258162 1101836 start.go:246] waiting for cluster config update ...
	I0918 21:27:19.258175 1101836 start.go:255] writing updated cluster config ...
	I0918 21:27:19.258515 1101836 ssh_runner.go:195] Run: rm -f paused
	I0918 21:27:19.329111 1101836 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:27:19.331594 1101836 out.go:177] * Done! kubectl is now configured to use "no-preload-460226" cluster and "default" namespace by default
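At this point the 1101836 run (the no-preload profile) has finished successfully: every kube-system pod is Running except metrics-server, which stays Pending on its unpullable image, a state the k8s-apps wait loop evidently tolerates. A quick manual cross-check with the same binary the test built (a sketch):

  out/minikube-linux-arm64 status -p no-preload-460226
  kubectl --context no-preload-460226 get pods -A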
	I0918 21:27:24.831742 1096772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:27:24.843745 1096772 api_server.go:72] duration metric: took 5m50.188751864s to wait for apiserver process to appear ...
	I0918 21:27:24.843777 1096772 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:27:24.843814 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:27:24.843875 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:27:24.889507 1096772 cri.go:89] found id: "e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972"
	I0918 21:27:24.889529 1096772 cri.go:89] found id: "e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f"
	I0918 21:27:24.889535 1096772 cri.go:89] found id: ""
	I0918 21:27:24.889543 1096772 logs.go:276] 2 containers: [e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972 e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f]
	I0918 21:27:24.889599 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:24.893345 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:24.896756 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0918 21:27:24.896831 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:27:24.966674 1096772 cri.go:89] found id: "bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31"
	I0918 21:27:24.966693 1096772 cri.go:89] found id: "85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155"
	I0918 21:27:24.966698 1096772 cri.go:89] found id: ""
	I0918 21:27:24.966705 1096772 logs.go:276] 2 containers: [bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31 85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155]
	I0918 21:27:24.966760 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:24.971029 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:24.975788 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0918 21:27:24.975857 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:27:25.025808 1096772 cri.go:89] found id: "3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e"
	I0918 21:27:25.025828 1096772 cri.go:89] found id: "76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77"
	I0918 21:27:25.025833 1096772 cri.go:89] found id: ""
	I0918 21:27:25.025840 1096772 logs.go:276] 2 containers: [3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e 76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77]
	I0918 21:27:25.025907 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.029877 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.033747 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:27:25.033840 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:27:25.073121 1096772 cri.go:89] found id: "5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65"
	I0918 21:27:25.073143 1096772 cri.go:89] found id: "654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3"
	I0918 21:27:25.073148 1096772 cri.go:89] found id: ""
	I0918 21:27:25.073155 1096772 logs.go:276] 2 containers: [5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65 654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3]
	I0918 21:27:25.073213 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.077015 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.080879 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:27:25.080972 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:27:25.120125 1096772 cri.go:89] found id: "97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57"
	I0918 21:27:25.120205 1096772 cri.go:89] found id: "724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890"
	I0918 21:27:25.120227 1096772 cri.go:89] found id: ""
	I0918 21:27:25.120269 1096772 logs.go:276] 2 containers: [97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57 724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890]
	I0918 21:27:25.120352 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.124126 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.127947 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:27:25.128026 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:27:25.167592 1096772 cri.go:89] found id: "6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e"
	I0918 21:27:25.167617 1096772 cri.go:89] found id: "0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca"
	I0918 21:27:25.167623 1096772 cri.go:89] found id: ""
	I0918 21:27:25.167630 1096772 logs.go:276] 2 containers: [6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e 0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca]
	I0918 21:27:25.167689 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.171303 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.174797 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0918 21:27:25.174881 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:27:25.220150 1096772 cri.go:89] found id: "d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db"
	I0918 21:27:25.220171 1096772 cri.go:89] found id: "db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c"
	I0918 21:27:25.220176 1096772 cri.go:89] found id: ""
	I0918 21:27:25.220183 1096772 logs.go:276] 2 containers: [d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c]
	I0918 21:27:25.220239 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.223726 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.227235 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:27:25.227307 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:27:25.264555 1096772 cri.go:89] found id: "ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e"
	I0918 21:27:25.264577 1096772 cri.go:89] found id: "cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f"
	I0918 21:27:25.264583 1096772 cri.go:89] found id: ""
	I0918 21:27:25.264590 1096772 logs.go:276] 2 containers: [ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f]
	I0918 21:27:25.264648 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.268036 1096772 ssh_runner.go:195] Run: which crictl
	I0918 21:27:25.271313 1096772 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:27:25.271383 1096772 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:27:25.314052 1096772 cri.go:89] found id: "d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e"
	I0918 21:27:25.314075 1096772 cri.go:89] found id: ""
	I0918 21:27:25.314083 1096772 logs.go:276] 1 containers: [d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e]
	I0918 21:27:25.314166 1096772 ssh_runner.go:195] Run: which crictl
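
The block above repeats one pattern per control-plane component: `sudo crictl ps -a --quiet --name=<component>` to resolve container IDs, then (as the "Gathering logs" lines that follow show) `sudo crictl logs --tail 400 <id>` per ID. A sketch reproducing that pattern in Go; the commands and the component list are taken from the log, while the function names are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherLogs resolves container IDs with `crictl ps -a --quiet --name=<name>`
// and tails each container's logs with `crictl logs --tail 400 <id>`,
// matching the sequence recorded above. Function name is illustrative.
func gatherLogs(component string) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Printf("=== %s [%s] ===\n", component, id)
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Print(string(logs))
	}
	return nil
}

func main() {
	// Components in the same order as the discovery sequence above.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		"kubernetes-dashboard"} {
		if err := gatherLogs(c); err != nil {
			fmt.Println("error:", err)
		}
	}
}
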
	I0918 21:27:25.317824 1096772 logs.go:123] Gathering logs for kube-proxy [724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890] ...
	I0918 21:27:25.317851 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890"
	I0918 21:27:25.356179 1096772 logs.go:123] Gathering logs for storage-provisioner [cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f] ...
	I0918 21:27:25.356211 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f"
	I0918 21:27:25.392458 1096772 logs.go:123] Gathering logs for container status ...
	I0918 21:27:25.392487 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:27:25.448007 1096772 logs.go:123] Gathering logs for dmesg ...
	I0918 21:27:25.448048 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:27:25.465593 1096772 logs.go:123] Gathering logs for kube-apiserver [e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f] ...
	I0918 21:27:25.465665 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f"
	I0918 21:27:25.540704 1096772 logs.go:123] Gathering logs for kindnet [db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c] ...
	I0918 21:27:25.540738 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c"
	I0918 21:27:25.582760 1096772 logs.go:123] Gathering logs for kube-apiserver [e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972] ...
	I0918 21:27:25.582787 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972"
	I0918 21:27:25.651789 1096772 logs.go:123] Gathering logs for etcd [bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31] ...
	I0918 21:27:25.651823 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31"
	I0918 21:27:25.698093 1096772 logs.go:123] Gathering logs for kube-controller-manager [0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca] ...
	I0918 21:27:25.698142 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca"
	I0918 21:27:25.774902 1096772 logs.go:123] Gathering logs for kubernetes-dashboard [d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e] ...
	I0918 21:27:25.774938 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e"
	I0918 21:27:25.824126 1096772 logs.go:123] Gathering logs for containerd ...
	I0918 21:27:25.824155 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0918 21:27:25.883310 1096772 logs.go:123] Gathering logs for kube-scheduler [654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3] ...
	I0918 21:27:25.883344 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3"
	I0918 21:27:25.943284 1096772 logs.go:123] Gathering logs for kube-controller-manager [6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e] ...
	I0918 21:27:25.943316 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e"
	I0918 21:27:26.006550 1096772 logs.go:123] Gathering logs for etcd [85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155] ...
	I0918 21:27:26.006599 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155"
	I0918 21:27:26.062095 1096772 logs.go:123] Gathering logs for coredns [3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e] ...
	I0918 21:27:26.062126 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e"
	I0918 21:27:26.106172 1096772 logs.go:123] Gathering logs for coredns [76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77] ...
	I0918 21:27:26.106202 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77"
	I0918 21:27:26.145634 1096772 logs.go:123] Gathering logs for kube-scheduler [5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65] ...
	I0918 21:27:26.145706 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65"
	I0918 21:27:26.186822 1096772 logs.go:123] Gathering logs for kube-proxy [97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57] ...
	I0918 21:27:26.186853 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57"
	I0918 21:27:26.237319 1096772 logs.go:123] Gathering logs for kindnet [d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db] ...
	I0918 21:27:26.237346 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db"
	I0918 21:27:26.302566 1096772 logs.go:123] Gathering logs for kubelet ...
	I0918 21:27:26.302596 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:27:26.370931 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:53 old-k8s-version-025914 kubelet[665]: E0918 21:21:53.794983     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-rqmbg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-rqmbg" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.371224 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:53 old-k8s-version-025914 kubelet[665]: E0918 21:21:53.796426     665 reflector.go:138] object-"kube-system"/"kindnet-token-xbssb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xbssb" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.375331 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022067     665 reflector.go:138] object-"kube-system"/"metrics-server-token-9b79x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9b79x" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.375546 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022444     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.375747 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022526     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.375974 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022568     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-n2hmt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-n2hmt" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.376196 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022715     665 reflector.go:138] object-"default"/"default-token-65brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-65brt" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.376409 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:54 old-k8s-version-025914 kubelet[665]: E0918 21:21:54.022772     665 reflector.go:138] object-"kube-system"/"coredns-token-jl4pr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jl4pr" is forbidden: User "system:node:old-k8s-version-025914" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-025914' and this object
	W0918 21:27:26.384238 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:55 old-k8s-version-025914 kubelet[665]: E0918 21:21:55.759054     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:26.384501 1096772 logs.go:138] Found kubelet problem: Sep 18 21:21:55 old-k8s-version-025914 kubelet[665]: E0918 21:21:55.869815     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.389366 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:06 old-k8s-version-025914 kubelet[665]: E0918 21:22:06.685684     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:26.391096 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:19 old-k8s-version-025914 kubelet[665]: E0918 21:22:19.677495     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.392101 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:24 old-k8s-version-025914 kubelet[665]: E0918 21:22:24.020367     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.392556 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:25 old-k8s-version-025914 kubelet[665]: E0918 21:22:25.025956     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.393045 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:28 old-k8s-version-025914 kubelet[665]: E0918 21:22:28.035972     665 pod_workers.go:191] Error syncing pod a55c40ca-6e3f-4daa-907a-f52eb8fa9d41 ("storage-provisioner_kube-system(a55c40ca-6e3f-4daa-907a-f52eb8fa9d41)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a55c40ca-6e3f-4daa-907a-f52eb8fa9d41)"
	W0918 21:27:26.393468 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:30 old-k8s-version-025914 kubelet[665]: E0918 21:22:30.495050     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.396385 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:34 old-k8s-version-025914 kubelet[665]: E0918 21:22:34.683315     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:26.397163 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:45 old-k8s-version-025914 kubelet[665]: E0918 21:22:45.144685     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.397375 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:45 old-k8s-version-025914 kubelet[665]: E0918 21:22:45.670921     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.397735 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:50 old-k8s-version-025914 kubelet[665]: E0918 21:22:50.494917     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.397946 1096772 logs.go:138] Found kubelet problem: Sep 18 21:22:57 old-k8s-version-025914 kubelet[665]: E0918 21:22:57.675883     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.398560 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:07 old-k8s-version-025914 kubelet[665]: E0918 21:23:07.202210     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.398771 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:08 old-k8s-version-025914 kubelet[665]: E0918 21:23:08.671464     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.399128 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:10 old-k8s-version-025914 kubelet[665]: E0918 21:23:10.495097     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.401614 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:19 old-k8s-version-025914 kubelet[665]: E0918 21:23:19.683726     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:26.401974 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:20 old-k8s-version-025914 kubelet[665]: E0918 21:23:20.670558     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.402186 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:30 old-k8s-version-025914 kubelet[665]: E0918 21:23:30.671353     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.402546 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:35 old-k8s-version-025914 kubelet[665]: E0918 21:23:35.670703     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.402763 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:45 old-k8s-version-025914 kubelet[665]: E0918 21:23:45.670861     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.403379 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:47 old-k8s-version-025914 kubelet[665]: E0918 21:23:47.312536     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.403738 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:50 old-k8s-version-025914 kubelet[665]: E0918 21:23:50.495433     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.403949 1096772 logs.go:138] Found kubelet problem: Sep 18 21:23:58 old-k8s-version-025914 kubelet[665]: E0918 21:23:58.670952     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.404310 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:02 old-k8s-version-025914 kubelet[665]: E0918 21:24:02.670893     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.404538 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:11 old-k8s-version-025914 kubelet[665]: E0918 21:24:11.670937     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.404891 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:15 old-k8s-version-025914 kubelet[665]: E0918 21:24:15.670558     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.405103 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:22 old-k8s-version-025914 kubelet[665]: E0918 21:24:22.672129     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.405478 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:31 old-k8s-version-025914 kubelet[665]: E0918 21:24:31.670774     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.405696 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:35 old-k8s-version-025914 kubelet[665]: E0918 21:24:35.671357     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.406054 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:45 old-k8s-version-025914 kubelet[665]: E0918 21:24:45.670584     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.408562 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:46 old-k8s-version-025914 kubelet[665]: E0918 21:24:46.681946     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0918 21:27:26.408792 1096772 logs.go:138] Found kubelet problem: Sep 18 21:24:59 old-k8s-version-025914 kubelet[665]: E0918 21:24:59.670935     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.409167 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:00 old-k8s-version-025914 kubelet[665]: E0918 21:25:00.670943     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.409379 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:10 old-k8s-version-025914 kubelet[665]: E0918 21:25:10.677379     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.409998 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:16 old-k8s-version-025914 kubelet[665]: E0918 21:25:16.556333     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.410365 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:20 old-k8s-version-025914 kubelet[665]: E0918 21:25:20.495478     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.410581 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:24 old-k8s-version-025914 kubelet[665]: E0918 21:25:24.671063     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.410935 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:32 old-k8s-version-025914 kubelet[665]: E0918 21:25:32.671331     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.411163 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:35 old-k8s-version-025914 kubelet[665]: E0918 21:25:35.670802     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.411520 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:46 old-k8s-version-025914 kubelet[665]: E0918 21:25:46.670903     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.411732 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:46 old-k8s-version-025914 kubelet[665]: E0918 21:25:46.674707     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.411944 1096772 logs.go:138] Found kubelet problem: Sep 18 21:25:58 old-k8s-version-025914 kubelet[665]: E0918 21:25:58.670888     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.412310 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:01 old-k8s-version-025914 kubelet[665]: E0918 21:26:01.670449     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.412670 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:12 old-k8s-version-025914 kubelet[665]: E0918 21:26:12.672485     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.412880 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:12 old-k8s-version-025914 kubelet[665]: E0918 21:26:12.672863     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.413263 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:24 old-k8s-version-025914 kubelet[665]: E0918 21:26:24.671138     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.413473 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:25 old-k8s-version-025914 kubelet[665]: E0918 21:26:25.670938     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.413828 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:35 old-k8s-version-025914 kubelet[665]: E0918 21:26:35.670522     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.414073 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:37 old-k8s-version-025914 kubelet[665]: E0918 21:26:37.670947     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.414471 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:48 old-k8s-version-025914 kubelet[665]: E0918 21:26:48.671182     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.414732 1096772 logs.go:138] Found kubelet problem: Sep 18 21:26:49 old-k8s-version-025914 kubelet[665]: E0918 21:26:49.670964     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.415107 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:00 old-k8s-version-025914 kubelet[665]: E0918 21:27:00.671211     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.415336 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:03 old-k8s-version-025914 kubelet[665]: E0918 21:27:03.670945     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.415692 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:14 old-k8s-version-025914 kubelet[665]: E0918 21:27:14.674590     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.415902 1096772 logs.go:138] Found kubelet problem: Sep 18 21:27:15 old-k8s-version-025914 kubelet[665]: E0918 21:27:15.670877     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
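
The "Found kubelet problem" warnings above come from scanning the kubelet journal (`sudo journalctl -u kubelet -n 400`) for known error patterns. A sketch of that scan with an assumed pattern list; minikube's real matcher lives in logs.go and may use different patterns:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	// Same journalctl invocation the log records above.
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	// Assumed patterns, based on the problem lines visible above.
	problem := regexp.MustCompile(`Failed to watch|Error syncing pod`)
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if line := sc.Text(); problem.MatchString(line) {
			fmt.Println("Found kubelet problem:", line)
		}
	}
}
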
	I0918 21:27:26.415929 1096772 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:27:26.415962 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:27:26.568040 1096772 logs.go:123] Gathering logs for storage-provisioner [ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e] ...
	I0918 21:27:26.568072 1096772 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e"
	I0918 21:27:26.609865 1096772 out.go:358] Setting ErrFile to fd 2...
	I0918 21:27:26.609892 1096772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0918 21:27:26.609939 1096772 out.go:270] X Problems detected in kubelet:
	W0918 21:27:26.609954 1096772 out.go:270]   Sep 18 21:26:49 old-k8s-version-025914 kubelet[665]: E0918 21:26:49.670964     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.609962 1096772 out.go:270]   Sep 18 21:27:00 old-k8s-version-025914 kubelet[665]: E0918 21:27:00.671211     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.609970 1096772 out.go:270]   Sep 18 21:27:03 old-k8s-version-025914 kubelet[665]: E0918 21:27:03.670945     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 21:27:26.609979 1096772 out.go:270]   Sep 18 21:27:14 old-k8s-version-025914 kubelet[665]: E0918 21:27:14.674590     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	W0918 21:27:26.609991 1096772 out.go:270]   Sep 18 21:27:15 old-k8s-version-025914 kubelet[665]: E0918 21:27:15.670877     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0918 21:27:26.609996 1096772 out.go:358] Setting ErrFile to fd 2...
	I0918 21:27:26.610002 1096772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:27:36.611419 1096772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0918 21:27:36.628302 1096772 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
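
The healthz probe above is a plain GET against https://192.168.85.2:8443/healthz that succeeds when the apiserver answers 200 with body `ok`. A short client approximating it; InsecureSkipVerify is an illustration-only shortcut, since a real probe would trust the cluster CA and credentials from the kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log. Skipping certificate verification
	// is for illustration only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
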
	I0918 21:27:36.640948 1096772 out.go:201] 
	W0918 21:27:36.644770 1096772 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0918 21:27:36.644810 1096772 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0918 21:27:36.644833 1096772 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0918 21:27:36.644841 1096772 out.go:270] * 
	W0918 21:27:36.645761 1096772 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 21:27:36.649040 1096772 out.go:201] 
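
The failure recorded above is a deadline expiring: the start path waits up to 6m0s for a healthy control plane at the requested version (v1.20.0 here) and exits with K8S_UNHEALTHY_CONTROL_PLANE when it never converges, even though healthz itself returns ok. A simplified sketch of that wait-with-deadline pattern; healthy() is a hypothetical stand-in for the healthz-plus-version checks, and the short timeout in main is for illustration (the log used 6m0s):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForHealthyAPIServer polls until the control plane reports healthy
// at the wanted version, or gives up when the deadline passes. This is
// a sketch of the pattern the log implies, not minikube's actual code.
func waitForHealthyAPIServer(want string, timeout time.Duration, healthy func() (string, bool)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if got, ok := healthy(); ok && got == want {
			return nil
		}
		time.Sleep(10 * time.Second) // the log shows roughly 10s between checks
	}
	return errors.New("wait for healthy API server: controlPlane never updated to " + want)
}

func main() {
	err := waitForHealthyAPIServer("v1.20.0", 30*time.Second, func() (string, bool) {
		// Placeholder: healthz is ok but the version never converges.
		return "v1.31.1", true
	})
	fmt.Println(err)
}
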
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	42a866a835daf       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   35a8abf2c7c6e       dashboard-metrics-scraper-8d5bb5db8-fg9g9
	ad203f2966e9c       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   521e2f3cbf883       storage-provisioner
	d619b53ff6371       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   8f66c8e4cfbb4       kubernetes-dashboard-cd95d586-dknpf
	3d20dac7d7681       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   04a5ea739c535       coredns-74ff55c5b-8jxxt
	cf7bfcff7e760       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   521e2f3cbf883       storage-provisioner
	97f0a0cb90df1       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   c5b855eee0bae       kube-proxy-gtz6t
	d504eaa19258b       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   0ab2adb36ac1c       kindnet-lj4hg
	6dd928374966a       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   f9c4efa8c491e       busybox
	6b43228024512       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   b34581316797a       kube-controller-manager-old-k8s-version-025914
	5d51ba1c2f38f       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   c4609602e6ed8       kube-scheduler-old-k8s-version-025914
	bc6a7d0aa408d       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   5ef4ba6491c8e       etcd-old-k8s-version-025914
	e2b1cd6e3e8ea       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   acdeb6123d8d0       kube-apiserver-old-k8s-version-025914
	dd59526292467       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   6c8826dde3e07       busybox
	76e4293b74987       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   95171a3a02b17       coredns-74ff55c5b-8jxxt
	db7d1204f54e4       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   dfd0b86f2405c       kindnet-lj4hg
	724fabe3bfc0d       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   3f910bce73ae9       kube-proxy-gtz6t
	654405d307882       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   2f0f3aef92f03       kube-scheduler-old-k8s-version-025914
	0c3a88d421567       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   c52eb5a6c38cb       kube-controller-manager-old-k8s-version-025914
	e10be7ceb6023       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   cc217ff4f1db5       kube-apiserver-old-k8s-version-025914
	8581897275347       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   6059867ef81f9       etcd-old-k8s-version-025914
	
	
	==> containerd <==
	Sep 18 21:23:46 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:23:46.713480976Z" level=info msg="CreateContainer within sandbox \"35a8abf2c7c6e3bd9ea82e8e35c351b7c5b8342d54f0c4c5194ef4cdc67f22f2\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"5a5d2def2ae317d2990f1eca722b1130ddb7559e7e84922bf7af2a862648b296\""
	Sep 18 21:23:46 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:23:46.714694321Z" level=info msg="StartContainer for \"5a5d2def2ae317d2990f1eca722b1130ddb7559e7e84922bf7af2a862648b296\""
	Sep 18 21:23:46 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:23:46.785719068Z" level=info msg="StartContainer for \"5a5d2def2ae317d2990f1eca722b1130ddb7559e7e84922bf7af2a862648b296\" returns successfully"
	Sep 18 21:23:46 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:23:46.812016961Z" level=info msg="shim disconnected" id=5a5d2def2ae317d2990f1eca722b1130ddb7559e7e84922bf7af2a862648b296 namespace=k8s.io
	Sep 18 21:23:46 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:23:46.812252472Z" level=warning msg="cleaning up after shim disconnected" id=5a5d2def2ae317d2990f1eca722b1130ddb7559e7e84922bf7af2a862648b296 namespace=k8s.io
	Sep 18 21:23:46 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:23:46.812278819Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 18 21:23:47 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:23:47.316386115Z" level=info msg="RemoveContainer for \"31280639b30d45f23fc2f020a55a7e12741085c67f6af12608422a615447dda2\""
	Sep 18 21:23:47 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:23:47.330645835Z" level=info msg="RemoveContainer for \"31280639b30d45f23fc2f020a55a7e12741085c67f6af12608422a615447dda2\" returns successfully"
	Sep 18 21:24:46 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:24:46.672875624Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 21:24:46 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:24:46.679267386Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 18 21:24:46 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:24:46.680977566Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 18 21:24:46 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:24:46.681124200Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 18 21:25:15 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:25:15.673754076Z" level=info msg="CreateContainer within sandbox \"35a8abf2c7c6e3bd9ea82e8e35c351b7c5b8342d54f0c4c5194ef4cdc67f22f2\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 18 21:25:15 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:25:15.689940550Z" level=info msg="CreateContainer within sandbox \"35a8abf2c7c6e3bd9ea82e8e35c351b7c5b8342d54f0c4c5194ef4cdc67f22f2\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629\""
	Sep 18 21:25:15 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:25:15.690516276Z" level=info msg="StartContainer for \"42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629\""
	Sep 18 21:25:15 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:25:15.754134415Z" level=info msg="StartContainer for \"42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629\" returns successfully"
	Sep 18 21:25:15 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:25:15.787713043Z" level=info msg="shim disconnected" id=42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629 namespace=k8s.io
	Sep 18 21:25:15 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:25:15.787873855Z" level=warning msg="cleaning up after shim disconnected" id=42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629 namespace=k8s.io
	Sep 18 21:25:15 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:25:15.787958400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 18 21:25:16 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:25:16.555838454Z" level=info msg="RemoveContainer for \"5a5d2def2ae317d2990f1eca722b1130ddb7559e7e84922bf7af2a862648b296\""
	Sep 18 21:25:16 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:25:16.564744372Z" level=info msg="RemoveContainer for \"5a5d2def2ae317d2990f1eca722b1130ddb7559e7e84922bf7af2a862648b296\" returns successfully"
	Sep 18 21:27:27 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:27:27.671383723Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 21:27:27 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:27:27.697958453Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 18 21:27:27 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:27:27.699682465Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 18 21:27:27 old-k8s-version-025914 containerd[570]: time="2024-09-18T21:27:27.699775823Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
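
	The PullImage failures above are plain DNS failures: "fake.domain" does not resolve against the node's resolver (192.168.85.1:53), so the metrics-server image can never be pulled and the pod stays in ImagePullBackOff (see the kubelet section below). A hedged way to see the same error from the API side, assuming the stock metrics-server label k8s-app=metrics-server and a kubeconfig context named after the profile:

	# the pod Events should repeat the "no such host" pull error
	kubectl --context old-k8s-version-025914 -n kube-system \
	  describe pod -l k8s-app=metrics-server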
	
	
	==> coredns [3d20dac7d76814e241e80426ce16df1e7c3a6d9b367fd1dd6c069ea113f09f4e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:40978 - 13656 "HINFO IN 3193991534537310039.3831583518034112134. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016149271s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0918 21:22:28.115031       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-18 21:21:58.114365507 +0000 UTC m=+0.036106078) (total time: 30.00051066s):
	Trace[2019727887]: [30.00051066s] [30.00051066s] END
	E0918 21:22:28.115076       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0918 21:22:28.115304       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-18 21:21:58.114851643 +0000 UTC m=+0.036592197) (total time: 30.000418188s):
	Trace[939984059]: [30.000418188s] [30.000418188s] END
	E0918 21:22:28.115402       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0918 21:22:28.115550       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-18 21:21:58.115034764 +0000 UTC m=+0.036775326) (total time: 30.000503103s):
	Trace[911902081]: [30.000503103s] [30.000503103s] END
	E0918 21:22:28.115654       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
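
	The three reflector errors above all fire exactly 30s after this coredns instance started (21:21:58 to 21:22:28): it came up while Service networking was still being reprogrammed after the restart, so its first list calls to the kubernetes Service VIP (10.96.0.1:443) timed out. The log goes quiet afterwards, which suggests the watches recovered once kube-proxy finished syncing (see its section below). One way to confirm the VIP is backed by the apiserver, assuming the context name matches the profile:

	# the kubernetes Service endpoints should point at 192.168.85.2
	kubectl --context old-k8s-version-025914 get endpoints kubernetes -o wide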
	
	
	==> coredns [76e4293b749871c6357bfa8472bba4b46e413d704e26a96f7752ad8fc765db77] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:54815 - 44526 "HINFO IN 4285811840323565962.8621404941730733692. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021703127s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-025914
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-025914
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=old-k8s-version-025914
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T21_19_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 21:19:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-025914
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 21:27:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 21:22:44 +0000   Wed, 18 Sep 2024 21:19:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 21:22:44 +0000   Wed, 18 Sep 2024 21:19:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 21:22:44 +0000   Wed, 18 Sep 2024 21:19:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 21:22:44 +0000   Wed, 18 Sep 2024 21:19:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-025914
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 1341f9e4895244f8b36d26c6342eb11a
	  System UUID:                13790f65-a949-4afd-a28f-7fb88968f6cd
	  Boot ID:                    3a935d26-70f7-413a-bfb9-48f0fb4fad17
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-74ff55c5b-8jxxt                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m12s
	  kube-system                 etcd-old-k8s-version-025914                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m19s
	  kube-system                 kindnet-lj4hg                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m12s
	  kube-system                 kube-apiserver-old-k8s-version-025914             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-controller-manager-old-k8s-version-025914    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-proxy-gtz6t                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-scheduler-old-k8s-version-025914             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 metrics-server-9975d5f86-vgp87                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m26s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-fg9g9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-dknpf               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m38s (x4 over 8m38s)  kubelet     Node old-k8s-version-025914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m38s (x4 over 8m38s)  kubelet     Node old-k8s-version-025914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m38s (x5 over 8m38s)  kubelet     Node old-k8s-version-025914 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m19s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m19s                  kubelet     Node old-k8s-version-025914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m19s                  kubelet     Node old-k8s-version-025914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m19s                  kubelet     Node old-k8s-version-025914 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m12s                  kubelet     Node old-k8s-version-025914 status is now: NodeReady
	  Normal  Starting                 8m11s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m56s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x7 over 5m56s)  kubelet     Node old-k8s-version-025914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-025914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-025914 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m40s                  kube-proxy  Starting kube-proxy.
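
	The Allocated resources block checks out by hand: the per-pod CPU requests in the table sum to 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 950m, matching the reported 950m (47%) of the 2-CPU allocatable. The node itself is Ready and untainted, so free CPU, not node health, is the binding constraint for any further CPU-requesting pod. A quick sketch (context assumed to match the profile):

	# print the node's allocatable CPU
	kubectl --context old-k8s-version-025914 get node old-k8s-version-025914 \
	  -o jsonpath='{.status.allocatable.cpu}{"\n"}'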
	
	
	==> dmesg <==
	[Sep18 21:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000011] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
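
	These two overlayfs warnings are emitted when nested overlay mounts share an upperdir/workdir, which happens routinely when minikube's Docker driver runs a container runtime inside a container; nothing else in this log points at filesystem errors, so they are most likely noise here rather than a cause of the failure. To re-check on the node:

	# the grep runs locally on the ssh output
	minikube -p old-k8s-version-025914 ssh -- sudo dmesg | grep -i overlayfs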
	
	
	==> etcd [85818972753477a8d1fef6825f3dbb234958e5902798d8c1ba087a5ca6d5c155] <==
	raft2024/09/18 21:19:02 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/09/18 21:19:02 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/09/18 21:19:02 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/09/18 21:19:02 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/09/18 21:19:02 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-09-18 21:19:02.458632 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-18 21:19:02.459119 I | etcdserver: published {Name:old-k8s-version-025914 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-09-18 21:19:02.459338 I | embed: ready to serve client requests
	2024-09-18 21:19:02.460893 I | embed: serving client requests on 192.168.85.2:2379
	2024-09-18 21:19:02.468180 I | embed: ready to serve client requests
	2024-09-18 21:19:02.469519 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-18 21:19:02.470004 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-18 21:19:02.470692 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-18 21:19:26.998922 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:19:28.944835 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:19:38.944696 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:19:48.944802 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:19:58.944622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:20:08.944628 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:20:18.944726 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:20:28.944828 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:20:38.944848 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:20:48.944735 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:20:58.945265 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:21:08.944875 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [bc6a7d0aa408d60cc20ea128762917c74839f483764155d9cc13c2315a995d31] <==
	2024-09-18 21:23:36.085569 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:23:46.085543 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:23:56.085711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:24:06.085524 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:24:16.085807 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:24:26.085567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:24:36.085783 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:24:46.085714 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:24:56.085677 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:25:06.085642 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:25:16.085700 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:25:26.085752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:25:36.085762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:25:46.085610 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:25:56.085608 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:26:06.085625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:26:16.085646 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:26:26.085646 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:26:36.085934 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:26:46.085537 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:26:56.085949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:27:06.085646 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:27:16.085627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:27:26.085824 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 21:27:36.085646 I | etcdserver/api/etcdhttp: /health OK (status code 200)
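
	Both etcd instances (attempt 0 above and attempt 1 here) do little besides answering /health with 200 every ten seconds, so the datastore stayed healthy across the restart and can be ruled out as the failure cause. If etcdctl is available in the node image, health can be probed directly; the certificate paths below are assumed to be minikube's defaults:

	minikube -p old-k8s-version-025914 ssh -- sudo ETCDCTL_API=3 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health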
	
	
	==> kernel <==
	 21:27:39 up  5:10,  0 users,  load average: 0.49, 1.62, 2.43
	Linux old-k8s-version-025914 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d504eaa19258b21dfd24b9de205612930479307b03b43d064e5250ca98c746db] <==
	I0918 21:25:37.019916       1 main.go:299] handling current node
	I0918 21:25:47.022643       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:25:47.022677       1 main.go:299] handling current node
	I0918 21:25:57.014724       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:25:57.014763       1 main.go:299] handling current node
	I0918 21:26:07.020184       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:26:07.020226       1 main.go:299] handling current node
	I0918 21:26:17.022782       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:26:17.022818       1 main.go:299] handling current node
	I0918 21:26:27.022592       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:26:27.022834       1 main.go:299] handling current node
	I0918 21:26:37.019446       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:26:37.019492       1 main.go:299] handling current node
	I0918 21:26:47.023192       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:26:47.023232       1 main.go:299] handling current node
	I0918 21:26:57.013848       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:26:57.013882       1 main.go:299] handling current node
	I0918 21:27:07.019715       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:27:07.019750       1 main.go:299] handling current node
	I0918 21:27:17.023491       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:27:17.023529       1 main.go:299] handling current node
	I0918 21:27:27.021999       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:27:27.022045       1 main.go:299] handling current node
	I0918 21:27:37.019232       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:27:37.019278       1 main.go:299] handling current node
	
	
	==> kindnet [db7d1204f54e44f975686145cb87687c241ba984988181677533f7f92550bf1c] <==
	I0918 21:19:30.413783       1 controller.go:338] Waiting for informer caches to sync
	I0918 21:19:30.413838       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0918 21:19:30.514542       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0918 21:19:30.514770       1 metrics.go:61] Registering metrics
	I0918 21:19:30.514934       1 controller.go:374] Syncing nftables rules
	I0918 21:19:40.416682       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:19:40.416825       1 main.go:299] handling current node
	I0918 21:19:50.414293       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:19:50.414330       1 main.go:299] handling current node
	I0918 21:20:00.421246       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:20:00.421356       1 main.go:299] handling current node
	I0918 21:20:10.420169       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:20:10.420204       1 main.go:299] handling current node
	I0918 21:20:20.421213       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:20:20.421247       1 main.go:299] handling current node
	I0918 21:20:30.413552       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:20:30.413585       1 main.go:299] handling current node
	I0918 21:20:40.420148       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:20:40.420181       1 main.go:299] handling current node
	I0918 21:20:50.415485       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:20:50.415517       1 main.go:299] handling current node
	I0918 21:21:00.417064       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:21:00.417119       1 main.go:299] handling current node
	I0918 21:21:10.420143       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0918 21:21:10.420185       1 main.go:299] handling current node
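
	Both kindnet instances just iterate the cluster's single node every ten seconds and report "handling current node", so the CNI agent is healthy and pod networking is not implicated. To confirm from the API side (the app=kindnet label is assumed from minikube's kindnet DaemonSet):

	kubectl --context old-k8s-version-025914 -n kube-system get pods -l app=kindnet -o wide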
	
	
	==> kube-apiserver [e10be7ceb6023e84ca9e9c7a82c9b89cd1df872607ec169d3564a2ffe8a3b10f] <==
	I0918 21:19:08.968368       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0918 21:19:08.972072       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0918 21:19:08.972140       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0918 21:19:09.522407       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 21:19:09.572754       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0918 21:19:09.686484       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0918 21:19:09.689693       1 controller.go:606] quota admission added evaluator for: endpoints
	I0918 21:19:09.694265       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 21:19:10.615966       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0918 21:19:11.087165       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0918 21:19:11.165346       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0918 21:19:19.587565       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 21:19:26.549961       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0918 21:19:26.699340       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0918 21:19:44.106184       1 client.go:360] parsed scheme: "passthrough"
	I0918 21:19:44.106233       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 21:19:44.106242       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0918 21:20:29.100183       1 client.go:360] parsed scheme: "passthrough"
	I0918 21:20:29.100229       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 21:20:29.100238       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0918 21:21:03.435625       1 client.go:360] parsed scheme: "passthrough"
	I0918 21:21:03.435708       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 21:21:03.435833       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0918 21:21:11.491409       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	E0918 21:21:11.519147       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-apiserver [e2b1cd6e3e8ea2b3339ccc984555b336fdfa5ebdb9befc0484a3c80853ec2972] <==
	I0918 21:24:17.821023       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 21:24:17.821052       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0918 21:24:56.945713       1 handler_proxy.go:102] no RequestInfo found in the context
	E0918 21:24:56.945793       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0918 21:24:56.945926       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:25:00.306332       1 client.go:360] parsed scheme: "passthrough"
	I0918 21:25:00.306392       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 21:25:00.306407       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0918 21:25:34.423567       1 client.go:360] parsed scheme: "passthrough"
	I0918 21:25:34.423613       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 21:25:34.423623       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0918 21:26:14.754083       1 client.go:360] parsed scheme: "passthrough"
	I0918 21:26:14.754131       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 21:26:14.754141       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0918 21:26:51.218694       1 client.go:360] parsed scheme: "passthrough"
	I0918 21:26:51.218742       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 21:26:51.218751       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0918 21:26:54.964971       1 handler_proxy.go:102] no RequestInfo found in the context
	E0918 21:26:54.965053       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0918 21:26:54.965066       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:27:33.062618       1 client.go:360] parsed scheme: "passthrough"
	I0918 21:27:33.062673       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 21:27:33.062683       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
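
	The recurring "no RequestInfo found in the context" warnings and 503s for v1beta1.metrics.k8s.io are the aggregation layer failing to reach the metrics-server backend, which never started because of the image-pull failure above, so the apiserver keeps requeueing OpenAPI aggregation for that APIService. The same unavailability shows up as discovery failures in the controller-manager section below. To inspect the APIService status directly (context assumed to match the profile):

	# AVAILABLE should be False while metrics-server has no ready endpoints
	kubectl --context old-k8s-version-025914 get apiservice v1beta1.metrics.k8s.io -o wide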
	
	
	==> kube-controller-manager [0c3a88d4215676cff10504108bd6d06a28201b12c10be0540b2a1f42b8759bca] <==
	I0918 21:19:26.644266       1 shared_informer.go:247] Caches are synced for disruption 
	I0918 21:19:26.644297       1 disruption.go:339] Sending events to api server.
	I0918 21:19:26.644614       1 range_allocator.go:373] Set node old-k8s-version-025914 PodCIDR to [10.244.0.0/24]
	E0918 21:19:26.653179       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I0918 21:19:26.670398       1 shared_informer.go:247] Caches are synced for attach detach 
	I0918 21:19:26.704588       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-025914" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0918 21:19:26.717197       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-2gqn8"
	I0918 21:19:26.752508       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-025914" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0918 21:19:26.765679       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-8jxxt"
	E0918 21:19:26.777895       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0918 21:19:26.786144       1 shared_informer.go:247] Caches are synced for resource quota 
	I0918 21:19:26.818800       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gtz6t"
	I0918 21:19:26.818838       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lj4hg"
	I0918 21:19:26.821472       1 event.go:291] "Event occurred" object="kube-dns" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kube-system/kube-dns: endpoints \"kube-dns\" already exists"
	I0918 21:19:26.887857       1 request.go:655] Throttling request took 1.002927951s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v2beta2?timeout=32s
	I0918 21:19:26.989959       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0918 21:19:27.192626       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0918 21:19:27.284002       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0918 21:19:27.284037       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0918 21:19:27.735490       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0918 21:19:27.735546       1 shared_informer.go:247] Caches are synced for resource quota 
	I0918 21:19:28.705560       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0918 21:19:28.765203       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-2gqn8"
	I0918 21:19:31.634550       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0918 21:21:11.230492       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-controller-manager [6b432280245128417f43db90e1b1b7b5edc2175f736c2007cb36c350005b8d6e] <==
	W0918 21:23:17.877066       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 21:23:43.889595       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 21:23:49.527466       1 request.go:655] Throttling request took 1.048517914s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0918 21:23:50.379075       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 21:24:14.391521       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 21:24:22.029613       1 request.go:655] Throttling request took 1.048236818s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0918 21:24:22.881376       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 21:24:44.895973       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 21:24:54.531885       1 request.go:655] Throttling request took 1.048654664s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0918 21:24:55.383480       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 21:25:15.398057       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 21:25:27.033884       1 request.go:655] Throttling request took 1.048463185s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0918 21:25:27.885168       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 21:25:45.899877       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 21:25:59.535703       1 request.go:655] Throttling request took 1.048416389s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0918 21:26:00.387829       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 21:26:16.401773       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 21:26:32.038610       1 request.go:655] Throttling request took 1.04832805s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0918 21:26:32.890091       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 21:26:46.903551       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 21:27:04.540503       1 request.go:655] Throttling request took 1.048365038s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0918 21:27:05.392008       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 21:27:17.405413       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 21:27:37.042481       1 request.go:655] Throttling request took 1.048501969s, request: GET:https://192.168.85.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W0918 21:27:37.894070       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [724fabe3bfc0d4d753b3c57ec909eefecb538362498548603ad975ca50b4e890] <==
	I0918 21:19:27.913145       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0918 21:19:27.913278       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0918 21:19:27.958235       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0918 21:19:27.958335       1 server_others.go:185] Using iptables Proxier.
	I0918 21:19:27.962019       1 server.go:650] Version: v1.20.0
	I0918 21:19:27.963446       1 config.go:315] Starting service config controller
	I0918 21:19:27.963480       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0918 21:19:27.963637       1 config.go:224] Starting endpoint slice config controller
	I0918 21:19:27.963648       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0918 21:19:28.064303       1 shared_informer.go:247] Caches are synced for service config 
	I0918 21:19:28.068485       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [97f0a0cb90df1f7a3f424eae191f498fc4f8902ff5fe34c17a59096879659a57] <==
	I0918 21:21:58.135330       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0918 21:21:58.135416       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0918 21:21:58.154799       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0918 21:21:58.154896       1 server_others.go:185] Using iptables Proxier.
	I0918 21:21:58.155178       1 server.go:650] Version: v1.20.0
	I0918 21:21:58.155881       1 config.go:315] Starting service config controller
	I0918 21:21:58.155998       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0918 21:21:58.156166       1 config.go:224] Starting endpoint slice config controller
	I0918 21:21:58.156244       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0918 21:21:58.256274       1 shared_informer.go:247] Caches are synced for service config 
	I0918 21:21:58.256330       1 shared_informer.go:247] Caches are synced for endpoint slice config 
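
	Both kube-proxy runs log 'Unknown proxy mode "", assuming iptables proxy': the mode field was left empty in the kube-proxy configuration, so the iptables proxier is used by default, and both the service-config and endpoint-slice caches synced within about 100ms each time. To inspect the configuration (ConfigMap name assumed to be the kubeadm default):

	kubectl --context old-k8s-version-025914 -n kube-system get configmap kube-proxy -o yaml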
	
	
	==> kube-scheduler [5d51ba1c2f38fd4d06104ce4f5c10bf7c8ba6f3b7ecbd7b8737dcb744f59ab65] <==
	I0918 21:21:47.411503       1 serving.go:331] Generated self-signed cert in-memory
	W0918 21:21:53.783817       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 21:21:53.783859       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 21:21:53.783868       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 21:21:53.783873       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 21:21:54.117018       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0918 21:21:54.117123       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 21:21:54.117136       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 21:21:54.117149       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0918 21:21:54.239386       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [654405d3078822d518f108e0e0f4ce918168f49c8f224dc7c0ab9e31851e3fc3] <==
	W0918 21:19:08.113316       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 21:19:08.113322       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 21:19:08.236138       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0918 21:19:08.236882       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 21:19:08.236946       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 21:19:08.236981       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0918 21:19:08.267886       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 21:19:08.268440       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 21:19:08.268670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 21:19:08.268911       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 21:19:08.269190       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 21:19:08.269383       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 21:19:08.269723       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 21:19:08.269944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 21:19:08.270210       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 21:19:08.270428       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 21:19:08.270550       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 21:19:08.270676       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 21:19:09.105938       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 21:19:09.182317       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 21:19:09.228701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 21:19:09.260468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 21:19:09.304538       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 21:19:09.350436       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0918 21:19:11.437087       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
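
	The Forbidden list/watch errors at 21:19:08-09 are the usual kubeadm startup race: the scheduler comes up before its RBAC bindings are readable, and the errors stop once its informer caches sync at 21:19:11 (last line above); the restarted scheduler hit only the milder extension-apiserver-authentication warnings. A hedged way to verify the permissions after startup, using impersonation:

	# should print "yes" once RBAC has settled
	kubectl --context old-k8s-version-025914 auth can-i list pods \
	  --as=system:kube-scheduler --all-namespaces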
	
	
	==> kubelet <==
	Sep 18 21:26:01 old-k8s-version-025914 kubelet[665]: E0918 21:26:01.670449     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	Sep 18 21:26:12 old-k8s-version-025914 kubelet[665]: I0918 21:26:12.671179     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629
	Sep 18 21:26:12 old-k8s-version-025914 kubelet[665]: E0918 21:26:12.672485     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	Sep 18 21:26:12 old-k8s-version-025914 kubelet[665]: E0918 21:26:12.672863     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 21:26:24 old-k8s-version-025914 kubelet[665]: I0918 21:26:24.670288     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629
	Sep 18 21:26:24 old-k8s-version-025914 kubelet[665]: E0918 21:26:24.671138     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	Sep 18 21:26:25 old-k8s-version-025914 kubelet[665]: E0918 21:26:25.670938     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 21:26:35 old-k8s-version-025914 kubelet[665]: I0918 21:26:35.670177     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629
	Sep 18 21:26:35 old-k8s-version-025914 kubelet[665]: E0918 21:26:35.670522     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	Sep 18 21:26:37 old-k8s-version-025914 kubelet[665]: E0918 21:26:37.670947     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 21:26:48 old-k8s-version-025914 kubelet[665]: I0918 21:26:48.670891     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629
	Sep 18 21:26:48 old-k8s-version-025914 kubelet[665]: E0918 21:26:48.671182     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	Sep 18 21:26:49 old-k8s-version-025914 kubelet[665]: E0918 21:26:49.670964     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 21:27:00 old-k8s-version-025914 kubelet[665]: I0918 21:27:00.670322     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629
	Sep 18 21:27:00 old-k8s-version-025914 kubelet[665]: E0918 21:27:00.671211     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	Sep 18 21:27:03 old-k8s-version-025914 kubelet[665]: E0918 21:27:03.670945     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 21:27:14 old-k8s-version-025914 kubelet[665]: I0918 21:27:14.674284     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629
	Sep 18 21:27:14 old-k8s-version-025914 kubelet[665]: E0918 21:27:14.674590     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
	Sep 18 21:27:15 old-k8s-version-025914 kubelet[665]: E0918 21:27:15.670877     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 21:27:27 old-k8s-version-025914 kubelet[665]: E0918 21:27:27.699956     665 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 18 21:27:27 old-k8s-version-025914 kubelet[665]: E0918 21:27:27.700016     665 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 18 21:27:27 old-k8s-version-025914 kubelet[665]: E0918 21:27:27.700554     665 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-9b79x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 18 21:27:27 old-k8s-version-025914 kubelet[665]: E0918 21:27:27.700598     665 pod_workers.go:191] Error syncing pod 5427cd13-ba5f-4bee-b70d-c1f5769460d5 ("metrics-server-9975d5f86-vgp87_kube-system(5427cd13-ba5f-4bee-b70d-c1f5769460d5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 18 21:27:29 old-k8s-version-025914 kubelet[665]: I0918 21:27:29.670198     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 42a866a835dafddb5a9f6e8d01555a47933bbdc6376cfd090629bdd915a26629
	Sep 18 21:27:29 old-k8s-version-025914 kubelet[665]: E0918 21:27:29.670572     665 pod_workers.go:191] Error syncing pod c1853d95-6a25-4aa7-878e-424c3f76eb9f ("dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fg9g9_kubernetes-dashboard(c1853d95-6a25-4aa7-878e-424c3f76eb9f)"
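The two failure loops in the kubelet log above are distinct: the metrics-server ImagePullBackOff is almost certainly intentional (fake.domain is a placeholder registry that can never resolve, as the "lookup fake.domain on 192.168.85.1:53: no such host" DNS error shows), while the dashboard-metrics-scraper CrashLoopBackOff is a genuine restart loop. A sketch of follow-up commands, assuming the profile were still running; pod names are copied from the log above:

	kubectl --context old-k8s-version-025914 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-fg9g9 --previous
	kubectl --context old-k8s-version-025914 -n kube-system describe pod metrics-server-9975d5f86-vgp87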
	
	
	==> kubernetes-dashboard [d619b53ff6371edf8204b2e924807efa16170bbcd9e5c7ee31b0271bd6bf271e] <==
	2024/09/18 21:22:17 Using namespace: kubernetes-dashboard
	2024/09/18 21:22:17 Using in-cluster config to connect to apiserver
	2024/09/18 21:22:17 Using secret token for csrf signing
	2024/09/18 21:22:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/18 21:22:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/18 21:22:17 Successful initial request to the apiserver, version: v1.20.0
	2024/09/18 21:22:17 Generating JWE encryption key
	2024/09/18 21:22:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/18 21:22:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/18 21:22:17 Initializing JWE encryption key from synchronized object
	2024/09/18 21:22:17 Creating in-cluster Sidecar client
	2024/09/18 21:22:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 21:22:18 Serving insecurely on HTTP port: 9090
	2024/09/18 21:22:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 21:23:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 21:23:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 21:24:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 21:24:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 21:25:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 21:25:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 21:26:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 21:26:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 21:27:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 21:22:17 Starting overwatch
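Every 30-second health-check failure above corresponds to the crash-looping dashboard-metrics-scraper pod seen in the kubelet log: the dashboard's Sidecar client queries the dashboard-metrics-scraper service, which has no ready endpoints while that pod is down. (The trailing 21:22:17 "Starting overwatch" line appears out of order in the capture, not evidence of a later restart.) A hedged way to confirm the missing endpoints, assuming the cluster were still up:

	kubectl --context old-k8s-version-025914 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper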
	
	
	==> storage-provisioner [ad203f2966e9ca22205cc7abd7c9bead7adaa52f290927bbd44b374df60a0b4e] <==
	I0918 21:22:44.140698       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 21:22:44.177178       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 21:22:44.177224       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 21:23:01.648787       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 21:23:01.649221       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-025914_f5a72de5-ad75-4a63-8a57-a2c365a9f3be!
	I0918 21:23:01.649855       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56975739-00bd-419b-8c96-ca23b05fddc4", APIVersion:"v1", ResourceVersion:"862", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-025914_f5a72de5-ad75-4a63-8a57-a2c365a9f3be became leader
	I0918 21:23:01.750339       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-025914_f5a72de5-ad75-4a63-8a57-a2c365a9f3be!
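As the LeaderElection event above shows, this provisioner serializes itself through an Endpoints-based lock named k8s.io-minikube-hostpath in kube-system. A sketch for inspecting the current lock holder, assuming the cluster is reachable:

	kubectl --context old-k8s-version-025914 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml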
	
	
	==> storage-provisioner [cf7bfcff7e7609d25ac14c4ef9ca2029f1de6779594e61d861fff19dde9f6e7f] <==
	I0918 21:21:57.965584       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0918 21:22:27.969139       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
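This is the storage-provisioner container that died during the restart: 10.96.0.1:443 is the in-cluster apiserver Service VIP, and the i/o timeout at 21:22:27 indicates the apiserver was not yet reachable; its replacement above then acquired the lease at 21:23:01. A sketch for re-testing that path from the node, assuming curl is present in the node image:

	minikube -p old-k8s-version-025914 ssh -- curl -sk https://10.96.0.1/version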
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-025914 -n old-k8s-version-025914
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-025914 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-vgp87
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-025914 describe pod metrics-server-9975d5f86-vgp87
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-025914 describe pod metrics-server-9975d5f86-vgp87: exit status 1 (118.577297ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-vgp87" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-025914 describe pod metrics-server-9975d5f86-vgp87: exit status 1
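The NotFound here is most likely a namespacing artifact rather than a vanished pod: the describe command is issued without -n, so kubectl looks in the default namespace, while metrics-server-9975d5f86-vgp87 lives in kube-system. A namespaced variant of the same post-mortem query would be:

	kubectl --context old-k8s-version-025914 -n kube-system describe pod metrics-server-9975d5f86-vgp87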
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.46s)


Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.29
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.31.1/json-events 13.42
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 219.15
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 15.63
34 TestAddons/parallel/Ingress 19.94
35 TestAddons/parallel/InspektorGadget 10.98
36 TestAddons/parallel/MetricsServer 6.84
39 TestAddons/parallel/CSI 54.51
40 TestAddons/parallel/Headlamp 16.16
41 TestAddons/parallel/CloudSpanner 6.84
42 TestAddons/parallel/LocalPath 53.29
43 TestAddons/parallel/NvidiaDevicePlugin 6.59
44 TestAddons/parallel/Yakd 11.82
45 TestAddons/StoppedEnableDisable 12.3
46 TestCertOptions 37.18
47 TestCertExpiration 230.61
49 TestForceSystemdFlag 47.78
50 TestForceSystemdEnv 43.48
51 TestDockerEnvContainerd 46.3
56 TestErrorSpam/setup 29.32
57 TestErrorSpam/start 0.76
58 TestErrorSpam/status 1.06
59 TestErrorSpam/pause 1.81
60 TestErrorSpam/unpause 1.86
61 TestErrorSpam/stop 1.45
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 86.81
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.17
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.29
73 TestFunctional/serial/CacheCmd/cache/add_local 1.28
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
75 TestFunctional/serial/CacheCmd/cache/list 0.07
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
81 TestFunctional/serial/ExtraConfig 294.43
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.77
84 TestFunctional/serial/LogsFileCmd 1.82
85 TestFunctional/serial/InvalidService 4.53
87 TestFunctional/parallel/ConfigCmd 0.45
88 TestFunctional/parallel/DashboardCmd 7.52
89 TestFunctional/parallel/DryRun 0.4
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.03
95 TestFunctional/parallel/ServiceCmdConnect 8.7
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 24.62
99 TestFunctional/parallel/SSHCmd 0.59
100 TestFunctional/parallel/CpCmd 2.01
102 TestFunctional/parallel/FileSync 0.36
103 TestFunctional/parallel/CertSync 2.19
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
111 TestFunctional/parallel/License 0.3
112 TestFunctional/parallel/Version/short 0.07
113 TestFunctional/parallel/Version/components 1.32
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.38
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.02
119 TestFunctional/parallel/ImageCommands/Setup 0.83
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.43
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.42
125 TestFunctional/parallel/ServiceCmd/DeployApp 11.28
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.45
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.33
136 TestFunctional/parallel/ServiceCmd/List 0.35
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
139 TestFunctional/parallel/ServiceCmd/Format 0.36
140 TestFunctional/parallel/ServiceCmd/URL 0.38
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
148 TestFunctional/parallel/ProfileCmd/profile_list 0.4
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
150 TestFunctional/parallel/MountCmd/any-port 7.88
151 TestFunctional/parallel/MountCmd/specific-port 2.07
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.23
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 107.42
160 TestMultiControlPlane/serial/DeployApp 33.26
161 TestMultiControlPlane/serial/PingHostFromPods 1.69
162 TestMultiControlPlane/serial/AddWorkerNode 21.41
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
165 TestMultiControlPlane/serial/CopyFile 19.6
166 TestMultiControlPlane/serial/StopSecondaryNode 12.84
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
168 TestMultiControlPlane/serial/RestartSecondaryNode 19.3
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.99
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 141.97
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.62
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
173 TestMultiControlPlane/serial/StopCluster 36.08
174 TestMultiControlPlane/serial/RestartCluster 77.62
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
176 TestMultiControlPlane/serial/AddSecondaryNode 39.99
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
181 TestJSONOutput/start/Command 53.48
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.76
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.67
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.79
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 41.72
207 TestKicCustomNetwork/use_default_bridge_network 34.3
208 TestKicExistingNetwork 33.71
209 TestKicCustomSubnet 34.1
210 TestKicStaticIP 35.46
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 66.43
215 TestMountStart/serial/StartWithMountFirst 6.37
216 TestMountStart/serial/VerifyMountFirst 0.28
217 TestMountStart/serial/StartWithMountSecond 6.43
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.63
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 7.94
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 64.99
227 TestMultiNode/serial/DeployApp2Nodes 18.94
228 TestMultiNode/serial/PingHostFrom2Pods 1.03
229 TestMultiNode/serial/AddNode 16.14
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.71
232 TestMultiNode/serial/CopyFile 10.28
233 TestMultiNode/serial/StopNode 2.45
234 TestMultiNode/serial/StartAfterStop 9.86
235 TestMultiNode/serial/RestartKeepsNodes 88.11
236 TestMultiNode/serial/DeleteNode 5.53
237 TestMultiNode/serial/StopMultiNode 24.01
238 TestMultiNode/serial/RestartMultiNode 56.05
239 TestMultiNode/serial/ValidateNameConflict 34.05
244 TestPreload 123.32
246 TestScheduledStopUnix 106.36
249 TestInsufficientStorage 10.29
250 TestRunningBinaryUpgrade 84.79
252 TestKubernetesUpgrade 374.81
253 TestMissingContainerUpgrade 183.78
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 38.16
257 TestNoKubernetes/serial/StartWithStopK8s 20.59
258 TestNoKubernetes/serial/Start 8.16
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
260 TestNoKubernetes/serial/ProfileList 0.99
261 TestNoKubernetes/serial/Stop 1.21
262 TestNoKubernetes/serial/StartNoArgs 7.37
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
264 TestStoppedBinaryUpgrade/Setup 1.05
265 TestStoppedBinaryUpgrade/Upgrade 180.52
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.32
275 TestPause/serial/Start 60.57
283 TestNetworkPlugins/group/false 3.62
287 TestPause/serial/SecondStartNoReconfiguration 7.41
288 TestPause/serial/Pause 0.76
289 TestPause/serial/VerifyStatus 0.3
290 TestPause/serial/Unpause 0.68
291 TestPause/serial/PauseAgain 0.89
292 TestPause/serial/DeletePaused 2.81
293 TestPause/serial/VerifyDeletedResources 0.57
295 TestStartStop/group/old-k8s-version/serial/FirstStart 155.02
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.87
298 TestStartStop/group/no-preload/serial/FirstStart 60.62
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.66
300 TestStartStop/group/old-k8s-version/serial/Stop 14.63
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
303 TestStartStop/group/no-preload/serial/DeployApp 8.46
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.74
305 TestStartStop/group/no-preload/serial/Stop 12.38
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 288.87
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
311 TestStartStop/group/no-preload/serial/Pause 3.11
313 TestStartStop/group/embed-certs/serial/FirstStart 96.11
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
317 TestStartStop/group/old-k8s-version/serial/Pause 3.81
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.09
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.07
323 TestStartStop/group/embed-certs/serial/DeployApp 10.38
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.97
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.52
327 TestStartStop/group/embed-certs/serial/Stop 12.75
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
329 TestStartStop/group/embed-certs/serial/SecondStart 267.62
330 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
332 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
333 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.68
335 TestStartStop/group/newest-cni/serial/FirstStart 42.69
336 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
338 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
339 TestStartStop/group/embed-certs/serial/Pause 3.86
340 TestNetworkPlugins/group/auto/Start 99.4
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.46
343 TestStartStop/group/newest-cni/serial/Stop 1.32
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
345 TestStartStop/group/newest-cni/serial/SecondStart 22.58
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
349 TestStartStop/group/newest-cni/serial/Pause 3.03
350 TestNetworkPlugins/group/kindnet/Start 51.81
351 TestNetworkPlugins/group/auto/KubeletFlags 0.31
352 TestNetworkPlugins/group/auto/NetCatPod 11.28
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
355 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
356 TestNetworkPlugins/group/auto/DNS 0.22
357 TestNetworkPlugins/group/auto/Localhost 0.16
358 TestNetworkPlugins/group/auto/HairPin 0.15
359 TestNetworkPlugins/group/kindnet/DNS 0.31
360 TestNetworkPlugins/group/kindnet/Localhost 0.24
361 TestNetworkPlugins/group/kindnet/HairPin 0.21
362 TestNetworkPlugins/group/calico/Start 72.49
363 TestNetworkPlugins/group/custom-flannel/Start 53.74
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.37
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/custom-flannel/DNS 0.18
368 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
369 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
370 TestNetworkPlugins/group/calico/KubeletFlags 0.3
371 TestNetworkPlugins/group/calico/NetCatPod 12.28
372 TestNetworkPlugins/group/calico/DNS 0.26
373 TestNetworkPlugins/group/calico/Localhost 0.21
374 TestNetworkPlugins/group/calico/HairPin 0.24
375 TestNetworkPlugins/group/enable-default-cni/Start 79.48
376 TestNetworkPlugins/group/flannel/Start 53.69
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
379 TestNetworkPlugins/group/flannel/NetCatPod 11.26
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
382 TestNetworkPlugins/group/flannel/DNS 0.19
383 TestNetworkPlugins/group/flannel/Localhost 0.16
384 TestNetworkPlugins/group/flannel/HairPin 0.16
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
388 TestNetworkPlugins/group/bridge/Start 40.62
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
390 TestNetworkPlugins/group/bridge/NetCatPod 10.27
391 TestNetworkPlugins/group/bridge/DNS 0.21
392 TestNetworkPlugins/group/bridge/Localhost 0.14
393 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (8.29s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-045343 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-045343 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.292399592s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.29s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0918 20:25:37.685841  879497 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0918 20:25:37.685925  879497 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-874114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-045343
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-045343: exit status 85 (72.953096ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-045343 | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC |          |
	|         | -p download-only-045343        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:25:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:25:29.433099  879502 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:25:29.433289  879502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:25:29.433315  879502 out.go:358] Setting ErrFile to fd 2...
	I0918 20:25:29.433334  879502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:25:29.433617  879502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	W0918 20:25:29.433783  879502 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19667-874114/.minikube/config/config.json: open /home/jenkins/minikube-integration/19667-874114/.minikube/config/config.json: no such file or directory
	I0918 20:25:29.434222  879502 out.go:352] Setting JSON to true
	I0918 20:25:29.435130  879502 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14878,"bootTime":1726676252,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0918 20:25:29.435224  879502 start.go:139] virtualization:  
	I0918 20:25:29.438893  879502 out.go:97] [download-only-045343] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0918 20:25:29.439094  879502 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19667-874114/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 20:25:29.439134  879502 notify.go:220] Checking for updates...
	I0918 20:25:29.442433  879502 out.go:169] MINIKUBE_LOCATION=19667
	I0918 20:25:29.445154  879502 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:25:29.447708  879502 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 20:25:29.450353  879502 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	I0918 20:25:29.452840  879502 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0918 20:25:29.457656  879502 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 20:25:29.457937  879502 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:25:29.480224  879502 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 20:25:29.480337  879502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:25:29.543376  879502 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-18 20:25:29.534019247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:25:29.543482  879502 docker.go:318] overlay module found
	I0918 20:25:29.546685  879502 out.go:97] Using the docker driver based on user configuration
	I0918 20:25:29.546731  879502 start.go:297] selected driver: docker
	I0918 20:25:29.546738  879502 start.go:901] validating driver "docker" against <nil>
	I0918 20:25:29.546856  879502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:25:29.595496  879502 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-18 20:25:29.585487564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:25:29.595705  879502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 20:25:29.596011  879502 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0918 20:25:29.596247  879502 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 20:25:29.599769  879502 out.go:169] Using Docker driver with root privileges
	I0918 20:25:29.602093  879502 cni.go:84] Creating CNI manager for ""
	I0918 20:25:29.602168  879502 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0918 20:25:29.602181  879502 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 20:25:29.602266  879502 start.go:340] cluster config:
	{Name:download-only-045343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-045343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:25:29.604550  879502 out.go:97] Starting "download-only-045343" primary control-plane node in "download-only-045343" cluster
	I0918 20:25:29.604586  879502 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0918 20:25:29.607498  879502 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0918 20:25:29.607523  879502 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0918 20:25:29.607580  879502 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0918 20:25:29.623003  879502 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 20:25:29.623610  879502 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 20:25:29.623721  879502 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 20:25:29.672506  879502 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0918 20:25:29.672548  879502 cache.go:56] Caching tarball of preloaded images
	I0918 20:25:29.672730  879502 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0918 20:25:29.675530  879502 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0918 20:25:29.675561  879502 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0918 20:25:29.761809  879502 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19667-874114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-045343 host does not exist
	  To start a cluster, run: "minikube start -p download-only-045343"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
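The Last Start log above also documents how the preload is fetched: the download URL carries an md5 in its checksum query parameter, which the downloader verifies before unpacking. A manual equivalent, as a sketch (URL and checksum copied from the log):

	curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4"
	md5sum preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4   # expect 7e3d48ccb9f143791669d02e14ce1643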

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-045343
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.31.1/json-events (13.42s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-879156 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-879156 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.415978561s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (13.42s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0918 20:25:51.527227  879497 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0918 20:25:51.527272  879497 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-874114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-879156
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-879156: exit status 85 (72.136285ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-045343 | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC |                     |
	|         | -p download-only-045343        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC | 18 Sep 24 20:25 UTC |
	| delete  | -p download-only-045343        | download-only-045343 | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC | 18 Sep 24 20:25 UTC |
	| start   | -o=json --download-only        | download-only-879156 | jenkins | v1.34.0 | 18 Sep 24 20:25 UTC |                     |
	|         | -p download-only-879156        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:25:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:25:38.152958  879697 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:25:38.153147  879697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:25:38.153178  879697 out.go:358] Setting ErrFile to fd 2...
	I0918 20:25:38.153202  879697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:25:38.153474  879697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	I0918 20:25:38.153937  879697 out.go:352] Setting JSON to true
	I0918 20:25:38.154843  879697 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14887,"bootTime":1726676252,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0918 20:25:38.154945  879697 start.go:139] virtualization:  
	I0918 20:25:38.158119  879697 out.go:97] [download-only-879156] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0918 20:25:38.158384  879697 notify.go:220] Checking for updates...
	I0918 20:25:38.160652  879697 out.go:169] MINIKUBE_LOCATION=19667
	I0918 20:25:38.163371  879697 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:25:38.165901  879697 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 20:25:38.168745  879697 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	I0918 20:25:38.170892  879697 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0918 20:25:38.175365  879697 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 20:25:38.175641  879697 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:25:38.215187  879697 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 20:25:38.215316  879697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:25:38.273665  879697 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-18 20:25:38.264129792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:25:38.273782  879697 docker.go:318] overlay module found
	I0918 20:25:38.276262  879697 out.go:97] Using the docker driver based on user configuration
	I0918 20:25:38.276295  879697 start.go:297] selected driver: docker
	I0918 20:25:38.276302  879697 start.go:901] validating driver "docker" against <nil>
	I0918 20:25:38.276413  879697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:25:38.327465  879697 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-18 20:25:38.317912319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:25:38.327678  879697 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 20:25:38.327976  879697 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0918 20:25:38.328165  879697 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 20:25:38.330808  879697 out.go:169] Using Docker driver with root privileges
	I0918 20:25:38.332946  879697 cni.go:84] Creating CNI manager for ""
	I0918 20:25:38.333015  879697 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0918 20:25:38.333027  879697 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 20:25:38.333113  879697 start.go:340] cluster config:
	{Name:download-only-879156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-879156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:25:38.335209  879697 out.go:97] Starting "download-only-879156" primary control-plane node in "download-only-879156" cluster
	I0918 20:25:38.335233  879697 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0918 20:25:38.337619  879697 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0918 20:25:38.337647  879697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0918 20:25:38.337755  879697 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0918 20:25:38.354430  879697 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 20:25:38.354575  879697 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 20:25:38.354658  879697 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0918 20:25:38.354666  879697 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0918 20:25:38.354675  879697 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0918 20:25:38.406041  879697 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0918 20:25:38.406066  879697 cache.go:56] Caching tarball of preloaded images
	I0918 20:25:38.406245  879697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0918 20:25:38.408341  879697 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0918 20:25:38.408361  879697 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0918 20:25:38.501600  879697 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19667-874114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-879156 host does not exist
	  To start a cluster, run: "minikube start -p download-only-879156"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-879156
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
I0918 20:25:52.771566  879497 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-179296 --alsologtostderr --binary-mirror http://127.0.0.1:37755 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-179296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-179296
--- PASS: TestBinaryMirror (0.57s)
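
For reference, the mirror flow this test exercises can be replayed by hand. A minimal sketch, assuming something is already serving the Kubernetes release binaries on the local port the test happened to pick (37755):

	out/minikube-linux-arm64 start --download-only -p binary-mirror-179296 \
	  --binary-mirror http://127.0.0.1:37755 \
	  --alsologtostderr --driver=docker --container-runtime=containerd

With --binary-mirror, minikube points its Kubernetes binary downloads at the given server rather than the default upstream.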

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-287708
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-287708: exit status 85 (63.970976ms)

-- stdout --
	* Profile "addons-287708" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-287708"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-287708
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-287708: exit status 85 (69.014961ms)

-- stdout --
	* Profile "addons-287708" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-287708"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (219.15s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-287708 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-287708 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m39.148785998s)
--- PASS: TestAddons/Setup (219.15s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-287708 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-287708 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Registry (15.63s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.079903ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-6vbt5" [235575f2-9f39-421f-9114-4b36aa14f2ec] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003989568s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lv8dc" [f5854149-c566-4094-b719-309fabacf2f1] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003495s
addons_test.go:342: (dbg) Run:  kubectl --context addons-287708 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-287708 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-287708 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.408464746s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 ip
2024/09/18 20:33:26 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.63s)
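
The core of the check above is that the registry service resolves and answers from inside the cluster. A minimal sketch of the same probe, reusing the exact command the test ran:

	kubectl --context addons-287708 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	out/minikube-linux-arm64 -p addons-287708 ip

The second command prints the node IP; the DEBUG line above shows the suite then probing the registry proxy on port 5000 at that address.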

TestAddons/parallel/Ingress (19.94s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-287708 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-287708 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-287708 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [de617727-05d5-4fd7-b88f-e2ac510588dc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [de617727-05d5-4fd7-b88f-e2ac510588dc] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.007396425s
I0918 20:34:47.779381  879497 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-287708 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-287708 addons disable ingress-dns --alsologtostderr -v=1: (1.248967807s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-287708 addons disable ingress --alsologtostderr -v=1: (7.797767442s)
--- PASS: TestAddons/parallel/Ingress (19.94s)
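
Both probes above are easy to replay by hand once the nginx Ingress and the ingress-dns example are applied. A minimal sketch using the same host names as the test:

	out/minikube-linux-arm64 -p addons-287708 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-287708 ip)"

The first exercises the ingress-nginx controller from inside the node; the second resolves the example hostname through the ingress-dns server at the node IP.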

TestAddons/parallel/InspektorGadget (10.98s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jc8pn" [553062d5-dcee-4b4e-80cb-1e3db7c451c8] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00453961s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-287708
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-287708: (5.970046697s)
--- PASS: TestAddons/parallel/InspektorGadget (10.98s)

TestAddons/parallel/MetricsServer (6.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.694968ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-8x6cq" [5cf58f18-a262-4368-a7fc-d111916eb6d2] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003150793s
addons_test.go:417: (dbg) Run:  kubectl --context addons-287708 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.84s)

TestAddons/parallel/CSI (54.51s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0918 20:33:52.755065  879497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0918 20:33:52.760724  879497 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0918 20:33:52.760948  879497 kapi.go:107] duration metric: took 7.623669ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.642205ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-287708 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-287708 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e53dc9d6-698d-405f-9f39-28993dd1bf48] Pending
helpers_test.go:344: "task-pv-pod" [e53dc9d6-698d-405f-9f39-28993dd1bf48] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e53dc9d6-698d-405f-9f39-28993dd1bf48] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003775619s
addons_test.go:590: (dbg) Run:  kubectl --context addons-287708 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-287708 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-287708 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-287708 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-287708 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-287708 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-287708 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [aca5bff6-96b6-4abb-bdcb-570c942842f1] Pending
helpers_test.go:344: "task-pv-pod-restore" [aca5bff6-96b6-4abb-bdcb-570c942842f1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [aca5bff6-96b6-4abb-bdcb-570c942842f1] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.010143888s
addons_test.go:632: (dbg) Run:  kubectl --context addons-287708 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-287708 delete pod task-pv-pod-restore: (1.137016044s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-287708 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-287708 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-287708 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.825702878s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.51s)
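
The repeated get pvc calls above are the helper polling the claim until it binds. A minimal shell rendering of that wait; the 2-second interval is illustrative, and Bound is the standard PVC phase being waited for:

	until [ "$(kubectl --context addons-287708 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
	  sleep 2
	done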

TestAddons/parallel/Headlamp (16.16s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-287708 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-287708 --alsologtostderr -v=1: (1.326410982s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-c4x2r" [c178ab33-9f9b-4883-93c0-f4589f18c104] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-c4x2r" [c178ab33-9f9b-4883-93c0-f4589f18c104] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-c4x2r" [c178ab33-9f9b-4883-93c0-f4589f18c104] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004122662s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-287708 addons disable headlamp --alsologtostderr -v=1: (5.833353647s)
--- PASS: TestAddons/parallel/Headlamp (16.16s)

TestAddons/parallel/CloudSpanner (6.84s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-lllvq" [66183052-ef12-485c-adda-2f3a4763ed8e] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007535666s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-287708
--- PASS: TestAddons/parallel/CloudSpanner (6.84s)

TestAddons/parallel/LocalPath (53.29s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-287708 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-287708 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-287708 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [19dcb76c-880c-4c4d-a2bd-2d809ad2959c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [19dcb76c-880c-4c4d-a2bd-2d809ad2959c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [19dcb76c-880c-4c4d-a2bd-2d809ad2959c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003728475s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-287708 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 ssh "cat /opt/local-path-provisioner/pvc-d0e49ae1-03ee-42cb-9cfe-37043e71760a_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-287708 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-287708 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-287708 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.959017684s)
--- PASS: TestAddons/parallel/LocalPath (53.29s)
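
A sketch of the same round trip by hand: create the claim and the writer pod, then read the file back from the provisioner's host path (the pvc-… directory name is derived from the PVC's UID, shown verbatim in the ssh command above):

	kubectl --context addons-287708 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-287708 apply -f testdata/storage-provisioner-rancher/pod.yaml
	out/minikube-linux-arm64 -p addons-287708 ssh \
	  "cat /opt/local-path-provisioner/pvc-d0e49ae1-03ee-42cb-9cfe-37043e71760a_default_test-pvc/file1"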

TestAddons/parallel/NvidiaDevicePlugin (6.59s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wvfsm" [7bf5fc49-f47e-428c-af3f-1c6152a86830] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003554511s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-287708
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

TestAddons/parallel/Yakd (11.82s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-m6mdk" [4acab548-73a7-4258-921c-1376977d961c] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004318263s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-287708 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-287708 addons disable yakd --alsologtostderr -v=1: (5.814646209s)
--- PASS: TestAddons/parallel/Yakd (11.82s)

TestAddons/StoppedEnableDisable (12.3s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-287708
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-287708: (12.016916254s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-287708
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-287708
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-287708
--- PASS: TestAddons/StoppedEnableDisable (12.30s)

TestCertOptions (37.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-106250 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-106250 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.486157332s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-106250 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-106250 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-106250 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-106250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-106250
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-106250: (2.024689405s)
--- PASS: TestCertOptions (37.18s)

TestCertExpiration (230.61s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-033085 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-033085 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.91589262s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-033085 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-033085 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.396340963s)
helpers_test.go:175: Cleaning up "cert-expiration-033085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-033085
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-033085: (2.2970192s)
--- PASS: TestCertExpiration (230.61s)
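
The two starts above are the whole mechanism: the first issues certificates that expire in three minutes, the pause lets them lapse, and the second start with a longer expiry forces regeneration. A minimal sketch with the same flags:

	out/minikube-linux-arm64 start -p cert-expiration-033085 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
	# wait out the 3m expiry, then:
	out/minikube-linux-arm64 start -p cert-expiration-033085 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd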

TestForceSystemdFlag (47.78s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-648772 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-648772 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (45.287151363s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-648772 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-648772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-648772
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-648772: (2.110480645s)
--- PASS: TestForceSystemdFlag (47.78s)
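
--force-systemd is verified by reading the containerd config generated inside the node. A minimal sketch of the same check; grepping for SystemdCgroup is an assumption about the relevant setting, not something shown in the output above:

	out/minikube-linux-arm64 start -p force-systemd-flag-648772 --memory=2048 --force-systemd --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p force-systemd-flag-648772 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup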

TestForceSystemdEnv (43.48s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-242864 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-242864 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.667207708s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-242864 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-242864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-242864
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-242864: (2.415273939s)
--- PASS: TestForceSystemdEnv (43.48s)

TestDockerEnvContainerd (46.3s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-099623 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-099623 --driver=docker  --container-runtime=containerd: (30.353273166s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-099623"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bgijaBiJFC6W/agent.898556" SSH_AGENT_PID="898557" DOCKER_HOST=ssh://docker@127.0.0.1:33885 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bgijaBiJFC6W/agent.898556" SSH_AGENT_PID="898557" DOCKER_HOST=ssh://docker@127.0.0.1:33885 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bgijaBiJFC6W/agent.898556" SSH_AGENT_PID="898557" DOCKER_HOST=ssh://docker@127.0.0.1:33885 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.207517449s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bgijaBiJFC6W/agent.898556" SSH_AGENT_PID="898557" DOCKER_HOST=ssh://docker@127.0.0.1:33885 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bgijaBiJFC6W/agent.898556" SSH_AGENT_PID="898557" DOCKER_HOST=ssh://docker@127.0.0.1:33885 docker image ls": (1.263037455s)
helpers_test.go:175: Cleaning up "dockerenv-099623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-099623
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-099623: (1.980155395s)
--- PASS: TestDockerEnvContainerd (46.30s)
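
The SSH_AUTH_SOCK/DOCKER_HOST variables threaded through the commands above are exactly what docker-env emits; interactively one would normally eval its output. A minimal sketch:

	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-099623)"
	docker version
	docker image ls

After the eval, the docker client talks to the daemon inside the minikube node over SSH.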

TestErrorSpam/setup (29.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-711454 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-711454 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-711454 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-711454 --driver=docker  --container-runtime=containerd: (29.317486066s)
--- PASS: TestErrorSpam/setup (29.32s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.06s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 status
--- PASS: TestErrorSpam/status (1.06s)

TestErrorSpam/pause (1.81s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 pause
--- PASS: TestErrorSpam/pause (1.81s)

TestErrorSpam/unpause (1.86s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

TestErrorSpam/stop (1.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 stop: (1.268903721s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-711454 --log_dir /tmp/nospam-711454 stop
--- PASS: TestErrorSpam/stop (1.45s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19667-874114/.minikube/files/etc/test/nested/copy/879497/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (86.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-247915 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-247915 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m26.805903756s)
--- PASS: TestFunctional/serial/StartWithProxy (86.81s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.17s)

=== RUN   TestFunctional/serial/SoftStart
I0918 20:38:11.017852  879497 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-247915 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-247915 --alsologtostderr -v=8: (6.171340129s)
functional_test.go:663: soft start took 6.173190462s for "functional-247915" cluster.
I0918 20:38:17.189582  879497 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.17s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-247915 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-247915 cache add registry.k8s.io/pause:3.1: (1.631731259s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-247915 cache add registry.k8s.io/pause:3.3: (1.408435177s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-247915 cache add registry.k8s.io/pause:latest: (1.246463441s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.29s)

TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-247915 /tmp/TestFunctionalserialCacheCmdcacheadd_local2234705950/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 cache add minikube-local-cache-test:functional-247915
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 cache delete minikube-local-cache-test:functional-247915
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-247915
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-247915 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.574197ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-247915 cache reload: (1.148416661s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 kubectl -- --context functional-247915 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-247915 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (294.43s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-247915 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0918 20:39:32.592267  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:32.598690  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:32.610067  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:32.631533  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:32.673049  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:32.754494  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:32.916064  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:33.237847  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:33.879978  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:35.161466  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:37.724419  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:42.845771  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:39:53.087105  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:40:13.568450  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:40:54.529833  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:42:16.452239  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-247915 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m54.427494626s)
functional_test.go:761: restart took 4m54.427620469s for "functional-247915" cluster.
I0918 20:43:20.201908  879497 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (294.43s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-247915 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.77s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-247915 logs: (1.769916026s)
--- PASS: TestFunctional/serial/LogsCmd (1.77s)

TestFunctional/serial/LogsFileCmd (1.82s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 logs --file /tmp/TestFunctionalserialLogsFileCmd4138781818/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-247915 logs --file /tmp/TestFunctionalserialLogsFileCmd4138781818/001/logs.txt: (1.819851541s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.82s)

TestFunctional/serial/InvalidService (4.53s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-247915 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-247915
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-247915: exit status 115 (648.480733ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32008 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-247915 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.53s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-247915 config get cpus: exit status 14 (68.421662ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-247915 config get cpus: exit status 14 (65.465764ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (7.52s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-247915 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-247915 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 925885: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.52s)

TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-247915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-247915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (173.267332ms)
-- stdout --
	* [functional-247915] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0918 20:44:09.223549  925521 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:44:09.223779  925521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:44:09.223806  925521 out.go:358] Setting ErrFile to fd 2...
	I0918 20:44:09.223828  925521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:44:09.224232  925521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	I0918 20:44:09.224698  925521 out.go:352] Setting JSON to false
	I0918 20:44:09.225961  925521 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15998,"bootTime":1726676252,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0918 20:44:09.226063  925521 start.go:139] virtualization:  
	I0918 20:44:09.229036  925521 out.go:177] * [functional-247915] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0918 20:44:09.231948  925521 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:44:09.232135  925521 notify.go:220] Checking for updates...
	I0918 20:44:09.235599  925521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:44:09.237939  925521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 20:44:09.239762  925521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	I0918 20:44:09.241650  925521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 20:44:09.243532  925521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:44:09.246127  925521 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 20:44:09.246811  925521 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:44:09.271033  925521 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 20:44:09.271168  925521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:44:09.337049  925521 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-09-18 20:44:09.327368299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:44:09.337175  925521 docker.go:318] overlay module found
	I0918 20:44:09.340484  925521 out.go:177] * Using the docker driver based on existing profile
	I0918 20:44:09.342146  925521 start.go:297] selected driver: docker
	I0918 20:44:09.342170  925521 start.go:901] validating driver "docker" against &{Name:functional-247915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-247915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:44:09.342294  925521 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:44:09.344759  925521 out.go:201] 
	W0918 20:44:09.346799  925521 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0918 20:44:09.348706  925521 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-247915 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.40s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-247915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-247915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (202.535956ms)
-- stdout --
	* [functional-247915] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0918 20:44:09.637747  925639 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:44:09.637873  925639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:44:09.637884  925639 out.go:358] Setting ErrFile to fd 2...
	I0918 20:44:09.637889  925639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:44:09.638808  925639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	I0918 20:44:09.639432  925639 out.go:352] Setting JSON to false
	I0918 20:44:09.640565  925639 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15998,"bootTime":1726676252,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0918 20:44:09.640642  925639 start.go:139] virtualization:  
	I0918 20:44:09.643148  925639 out.go:177] * [functional-247915] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0918 20:44:09.645307  925639 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:44:09.645456  925639 notify.go:220] Checking for updates...
	I0918 20:44:09.648507  925639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:44:09.650127  925639 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 20:44:09.651742  925639 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	I0918 20:44:09.653992  925639 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 20:44:09.655735  925639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:44:09.657800  925639 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 20:44:09.658409  925639 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:44:09.693557  925639 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 20:44:09.693752  925639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:44:09.769451  925639 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-09-18 20:44:09.757870216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:44:09.769570  925639 docker.go:318] overlay module found
	I0918 20:44:09.771662  925639 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0918 20:44:09.773416  925639 start.go:297] selected driver: docker
	I0918 20:44:09.773437  925639 start.go:901] validating driver "docker" against &{Name:functional-247915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-247915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:44:09.773565  925639 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:44:09.776022  925639 out.go:201] 
	W0918 20:44:09.777878  925639 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0918 20:44:09.779589  925639 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (8.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-247915 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-247915 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-kwxxn" [5e5a45f8-028e-4fdc-b2de-e9677f6af8ba] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-kwxxn" [5e5a45f8-028e-4fdc-b2de-e9677f6af8ba] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003726924s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32686
functional_test.go:1675: http://192.168.49.2:32686: success! body:
Hostname: hello-node-connect-65d86f57f4-kwxxn
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32686
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.70s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (24.62s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [56401b51-556b-4ebd-8981-f40d8454324c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003302794s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-247915 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-247915 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-247915 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-247915 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0be96f6d-a5b7-4c39-aaa8-870613cd6d1c] Pending
helpers_test.go:344: "sp-pod" [0be96f6d-a5b7-4c39-aaa8-870613cd6d1c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0be96f6d-a5b7-4c39-aaa8-870613cd6d1c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004123652s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-247915 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-247915 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-247915 delete -f testdata/storage-provisioner/pod.yaml: (1.547094746s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-247915 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [049c1fdb-a9f3-4514-95ec-029ae139c3b9] Pending
helpers_test.go:344: "sp-pod" [049c1fdb-a9f3-4514-95ec-029ae139c3b9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004375203s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-247915 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.62s)

TestFunctional/parallel/SSHCmd (0.59s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

TestFunctional/parallel/CpCmd (2.01s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh -n functional-247915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 cp functional-247915:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2632577121/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh -n functional-247915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh -n functional-247915 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.01s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/879497/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "sudo cat /etc/test/nested/copy/879497/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.19s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/879497.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "sudo cat /etc/ssl/certs/879497.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/879497.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "sudo cat /usr/share/ca-certificates/879497.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/8794972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "sudo cat /etc/ssl/certs/8794972.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/8794972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "sudo cat /usr/share/ca-certificates/8794972.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-247915 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-247915 ssh "sudo systemctl is-active docker": exit status 1 (355.716008ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-247915 ssh "sudo systemctl is-active crio": exit status 1 (350.278017ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-247915 version -o=json --components: (1.319348549s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-247915 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-247915
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-247915
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-247915 image ls --format short --alsologtostderr:
I0918 20:44:13.012511  926193 out.go:345] Setting OutFile to fd 1 ...
I0918 20:44:13.012727  926193 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:44:13.012741  926193 out.go:358] Setting ErrFile to fd 2...
I0918 20:44:13.012785  926193 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:44:13.013086  926193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
I0918 20:44:13.013870  926193 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0918 20:44:13.014070  926193 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0918 20:44:13.014676  926193 cli_runner.go:164] Run: docker container inspect functional-247915 --format={{.State.Status}}
I0918 20:44:13.034599  926193 ssh_runner.go:195] Run: systemctl --version
I0918 20:44:13.034724  926193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247915
I0918 20:44:13.056054  926193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/functional-247915/id_rsa Username:docker}
I0918 20:44:13.153238  926193 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-247915 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| docker.io/kicbase/echo-server               | functional-247915  | sha256:ce2d2c | 2.17MB |
| docker.io/library/minikube-local-cache-test | functional-247915  | sha256:296b31 | 991B   |
| localhost/my-image                          | functional-247915  | sha256:1b648a | 831kB  |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-247915 image ls --format table --alsologtostderr:
I0918 20:44:17.557403  926603 out.go:345] Setting OutFile to fd 1 ...
I0918 20:44:17.557644  926603 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:44:17.557699  926603 out.go:358] Setting ErrFile to fd 2...
I0918 20:44:17.557722  926603 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:44:17.558363  926603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
I0918 20:44:17.560275  926603 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0918 20:44:17.560598  926603 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0918 20:44:17.561867  926603 cli_runner.go:164] Run: docker container inspect functional-247915 --format={{.State.Status}}
I0918 20:44:17.596566  926603 ssh_runner.go:195] Run: systemctl --version
I0918 20:44:17.596616  926603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247915
I0918 20:44:17.619358  926603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/functional-247915/id_rsa Username:docker}
I0918 20:44:17.724779  926603 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-247915 image ls --format json --alsologtostderr:
[{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTag
s":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:1b648af817c04e8c578a5c0a4134c6a8b3e2e2ddc56391201bc221488b8f8e82","repoDigests":[],"repoTags":["localhost/my-image:functional-247915"],"size":"830616"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-
arm:1.8"],"size":"45324675"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea8412
9deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-247915"],"size":"2173567"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f
263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:296b313eb769bfdcfcdbbb2571e414442eb8051bb75e01f5ecfa002c8da0f3da","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-247915"],"size":"991"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-247915 image ls --format json --alsologtostderr:
I0918 20:44:17.385162  926532 out.go:345] Setting OutFile to fd 1 ...
I0918 20:44:17.385328  926532 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:44:17.385341  926532 out.go:358] Setting ErrFile to fd 2...
I0918 20:44:17.385346  926532 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:44:17.385597  926532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
I0918 20:44:17.386334  926532 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0918 20:44:17.386464  926532 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0918 20:44:17.386930  926532 cli_runner.go:164] Run: docker container inspect functional-247915 --format={{.State.Status}}
I0918 20:44:17.411136  926532 ssh_runner.go:195] Run: systemctl --version
I0918 20:44:17.411190  926532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247915
I0918 20:44:17.439756  926532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/functional-247915/id_rsa Username:docker}
I0918 20:44:17.585347  926532 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)
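
The JSON form above is the easiest of the four list formats to post-process. A minimal sketch of pulling every repo:tag out of it with jq (jq itself is an assumption here; it is not part of the minikube tooling this run exercises):

  out/minikube-linux-arm64 -p functional-247915 image ls --format json \
    | jq -r '.[] | .repoTags[]'   # one repo:tag per line; digest-only entries have empty repoTags and print nothing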

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-247915 image ls --format yaml --alsologtostderr:
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:296b313eb769bfdcfcdbbb2571e414442eb8051bb75e01f5ecfa002c8da0f3da
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-247915
size: "991"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-247915
size: "2173567"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-247915 image ls --format yaml --alsologtostderr:
I0918 20:44:13.270939  926231 out.go:345] Setting OutFile to fd 1 ...
I0918 20:44:13.271400  926231 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:44:13.271427  926231 out.go:358] Setting ErrFile to fd 2...
I0918 20:44:13.271559  926231 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:44:13.272101  926231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
I0918 20:44:13.273667  926231 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0918 20:44:13.273843  926231 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0918 20:44:13.274467  926231 cli_runner.go:164] Run: docker container inspect functional-247915 --format={{.State.Status}}
I0918 20:44:13.294901  926231 ssh_runner.go:195] Run: systemctl --version
I0918 20:44:13.294954  926231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247915
I0918 20:44:13.326628  926231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/functional-247915/id_rsa Username:docker}
I0918 20:44:13.425033  926231 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-247915 ssh pgrep buildkitd: exit status 1 (333.310076ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image build -t localhost/my-image:functional-247915 testdata/build --alsologtostderr
2024/09/18 20:44:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-247915 image build -t localhost/my-image:functional-247915 testdata/build --alsologtostderr: (3.358985429s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-247915 image build -t localhost/my-image:functional-247915 testdata/build --alsologtostderr:
I0918 20:44:13.890862  926324 out.go:345] Setting OutFile to fd 1 ...
I0918 20:44:13.891496  926324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:44:13.891526  926324 out.go:358] Setting ErrFile to fd 2...
I0918 20:44:13.891549  926324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:44:13.891824  926324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
I0918 20:44:13.892571  926324 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0918 20:44:13.894196  926324 config.go:182] Loaded profile config "functional-247915": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0918 20:44:13.894748  926324 cli_runner.go:164] Run: docker container inspect functional-247915 --format={{.State.Status}}
I0918 20:44:13.913548  926324 ssh_runner.go:195] Run: systemctl --version
I0918 20:44:13.913601  926324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-247915
I0918 20:44:13.933299  926324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/functional-247915/id_rsa Username:docker}
I0918 20:44:14.032870  926324 build_images.go:161] Building image from path: /tmp/build.1124283260.tar
I0918 20:44:14.032949  926324 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0918 20:44:14.042870  926324 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1124283260.tar
I0918 20:44:14.046921  926324 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1124283260.tar: stat -c "%s %y" /var/lib/minikube/build/build.1124283260.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1124283260.tar': No such file or directory
I0918 20:44:14.046949  926324 ssh_runner.go:362] scp /tmp/build.1124283260.tar --> /var/lib/minikube/build/build.1124283260.tar (3072 bytes)
I0918 20:44:14.074644  926324 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1124283260
I0918 20:44:14.085729  926324 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1124283260 -xf /var/lib/minikube/build/build.1124283260.tar
I0918 20:44:14.096493  926324 containerd.go:394] Building image: /var/lib/minikube/build/build.1124283260
I0918 20:44:14.096645  926324 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1124283260 --local dockerfile=/var/lib/minikube/build/build.1124283260 --output type=image,name=localhost/my-image:functional-247915
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:3a1cfa7cf92f44336e21f27295b8b78e0dfea6e2e2335ef53c07d1803ed835f9
#8 exporting manifest sha256:3a1cfa7cf92f44336e21f27295b8b78e0dfea6e2e2335ef53c07d1803ed835f9 0.0s done
#8 exporting config sha256:1b648af817c04e8c578a5c0a4134c6a8b3e2e2ddc56391201bc221488b8f8e82 0.0s done
#8 naming to localhost/my-image:functional-247915 done
#8 DONE 0.2s
I0918 20:44:17.139535  926324 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1124283260 --local dockerfile=/var/lib/minikube/build/build.1124283260 --output type=image,name=localhost/my-image:functional-247915: (3.042841511s)
I0918 20:44:17.139606  926324 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1124283260
I0918 20:44:17.151595  926324 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1124283260.tar
I0918 20:44:17.165893  926324 build_images.go:217] Built localhost/my-image:functional-247915 from /tmp/build.1124283260.tar
I0918 20:44:17.165926  926324 build_images.go:133] succeeded building to: functional-247915
I0918 20:44:17.165932  926324 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)
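
The stderr above traces the whole containerd build path: the context is tarred on the host, copied to /var/lib/minikube/build, unpacked, and handed to buildctl. A condensed sketch of the same flow, reusing the temp-dir name from this run (in practice minikube generates a fresh one each time):

  out/minikube-linux-arm64 -p functional-247915 image build \
    -t localhost/my-image:functional-247915 testdata/build --alsologtostderr
  # roughly what runs inside the node (via minikube ssh), per the log above:
  sudo buildctl build --frontend dockerfile.v0 \
    --local context=/var/lib/minikube/build/build.1124283260 \
    --local dockerfile=/var/lib/minikube/build/build.1124283260 \
    --output type=image,name=localhost/my-image:functional-247915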

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-247915
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image load --daemon kicbase/echo-server:functional-247915 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-247915 image load --daemon kicbase/echo-server:functional-247915 --alsologtostderr: (1.151656322s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image load --daemon kicbase/echo-server:functional-247915 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-247915 image load --daemon kicbase/echo-server:functional-247915 --alsologtostderr: (1.12215807s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-247915 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-247915 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-nj5t9" [da7c6688-2b11-4ffe-b074-9a9f67019025] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-nj5t9" [da7c6688-2b11-4ffe-b074-9a9f67019025] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00446091s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.28s)
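
The subtest boils down to a standard deploy-and-expose sequence; a sketch against the same context and image, with a watch added here as a convenience for following pod readiness:

  kubectl --context functional-247915 create deployment hello-node \
    --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-247915 expose deployment hello-node \
    --type=NodePort --port=8080
  kubectl --context functional-247915 get pods -l app=hello-node -w   # wait for STATUS Running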

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-247915
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image load --daemon kicbase/echo-server:functional-247915 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image save kicbase/echo-server:functional-247915 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image rm kicbase/echo-server:functional-247915 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-247915
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 image save --daemon kicbase/echo-server:functional-247915 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-247915
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)
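
Together with ImageSaveToFile, ImageRemove, and ImageLoadFromFile above, this closes a full round trip between the cluster's containerd store and the host. A condensed sketch, with /tmp/echo-server-save.tar standing in for the workspace path this run used:

  out/minikube-linux-arm64 -p functional-247915 image save \
    kicbase/echo-server:functional-247915 /tmp/echo-server-save.tar        # cluster -> tarball
  out/minikube-linux-arm64 -p functional-247915 image rm kicbase/echo-server:functional-247915
  out/minikube-linux-arm64 -p functional-247915 image load /tmp/echo-server-save.tar   # tarball -> cluster
  out/minikube-linux-arm64 -p functional-247915 image save --daemon \
    kicbase/echo-server:functional-247915                                  # cluster -> host docker daemon
  docker image inspect kicbase/echo-server:functional-247915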

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-247915 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-247915 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-247915 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 922300: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-247915 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-247915 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.33s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-247915 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [929e9716-f653-4e12-ba01-d670ec3f1f97] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [929e9716-f653-4e12-ba01-d670ec3f1f97] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003897429s
I0918 20:43:46.708769  879497 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 service list -o json
functional_test.go:1494: Took "382.49796ms" to run "out/minikube-linux-arm64 -p functional-247915 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31457
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)
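
--format here takes a Go template evaluated per service, which is why {{.IP}} strips the URL down to the node address. A sketch of both forms against the hello-node service from the earlier subtests (output values are the ones this run reported):

  out/minikube-linux-arm64 -p functional-247915 service hello-node --url --format={{.IP}}   # 192.168.49.2
  out/minikube-linux-arm64 -p functional-247915 service hello-node --url                    # http://192.168.49.2:31457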

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31457
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-247915 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.14.11 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
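
AccessDirect only passes while a tunnel process holds the route: that is what makes the in-cluster address 10.96.14.11 reachable from the host. A sketch of the manual equivalent, assuming the nginx-svc LoadBalancer service from WaitService/Setup is deployed (tunnel may prompt for sudo to edit routes):

  out/minikube-linux-arm64 -p functional-247915 tunnel &   # keep running in the background
  kubectl --context functional-247915 get svc nginx-svc \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'     # non-empty once the tunnel assigns the IP
  curl http://10.96.14.11/                                 # served by the nginx-svc pod while the tunnel is up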

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-247915 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "342.69452ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "54.120809ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "334.683086ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "55.52983ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.88s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-247915 /tmp/TestFunctionalparallelMountCmdany-port4165822765/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726692238011415044" to /tmp/TestFunctionalparallelMountCmdany-port4165822765/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726692238011415044" to /tmp/TestFunctionalparallelMountCmdany-port4165822765/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726692238011415044" to /tmp/TestFunctionalparallelMountCmdany-port4165822765/001/test-1726692238011415044
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-247915 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (322.19959ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0918 20:43:58.335300  879497 retry.go:31] will retry after 495.656284ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 18 20:43 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 18 20:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 18 20:43 test-1726692238011415044
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh cat /mount-9p/test-1726692238011415044
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-247915 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [76346ea4-ece4-4928-9b6e-2cf9a6945580] Pending
helpers_test.go:344: "busybox-mount" [76346ea4-ece4-4928-9b6e-2cf9a6945580] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [76346ea4-ece4-4928-9b6e-2cf9a6945580] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [76346ea4-ece4-4928-9b6e-2cf9a6945580] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004996088s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-247915 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-247915 /tmp/TestFunctionalparallelMountCmdany-port4165822765/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.88s)
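
The any-port flow is a host-to-guest 9p mount on an ephemeral port; a sketch with a hypothetical host directory /tmp/hostdir in place of the per-test temp dir:

  out/minikube-linux-arm64 mount -p functional-247915 /tmp/hostdir:/mount-9p &        # 9p server picks a free port
  out/minikube-linux-arm64 -p functional-247915 ssh "findmnt -T /mount-9p | grep 9p"  # verify the mount landed
  out/minikube-linux-arm64 -p functional-247915 ssh "sudo umount -f /mount-9p"        # tear down before killing the mount process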

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.07s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-247915 /tmp/TestFunctionalparallelMountCmdspecific-port2461243958/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-247915 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (373.318471ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0918 20:44:06.256032  879497 retry.go:31] will retry after 659.514611ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-247915 /tmp/TestFunctionalparallelMountCmdspecific-port2461243958/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-247915 ssh "sudo umount -f /mount-9p": exit status 1 (270.449286ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-247915 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-247915 /tmp/TestFunctionalparallelMountCmdspecific-port2461243958/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-247915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1902471192/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-247915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1902471192/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-247915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1902471192/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-247915 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-247915 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-247915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1902471192/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-247915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1902471192/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-247915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1902471192/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-247915
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-247915
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-247915
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (107.42s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-885597 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0918 20:44:32.591211  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:45:00.300858  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-885597 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m46.5448305s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (107.42s)
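
The 1m46s start brings up a multi-control-plane topology from the single --ha flag; a minimal sketch of reproducing the start and status check from this run:

  out/minikube-linux-arm64 start -p ha-885597 --wait=true --memory=2200 --ha \
    --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p ha-885597 status   # each control-plane node should report Running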

                                                
                                    
TestMultiControlPlane/serial/DeployApp (33.26s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-885597 -- rollout status deployment/busybox: (30.413666502s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-27tbl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-bdprx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-gc5gq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-27tbl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-bdprx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-gc5gq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-27tbl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-bdprx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-gc5gq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (33.26s)
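
The assertions above boil down to running nslookup for three names in every busybox pod. A compact Go sketch of that loop, assuming kubectl on PATH and the context from this run; the pod names are the ones scheduled here and will differ between runs:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-27tbl", "busybox-7dff88458-bdprx", "busybox-7dff88458-gc5gq"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, host := range names {
			// kubectl exec runs nslookup inside the pod, exercising cluster DNS.
			out, err := exec.Command("kubectl", "--context", "ha-885597",
				"exec", pod, "--", "nslookup", host).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup %s failed: %v\n%s", pod, host, err, out)
				continue
			}
			fmt.Printf("%s resolved %s\n", pod, host)
		}
	}
}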

TestMultiControlPlane/serial/PingHostFromPods (1.69s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-27tbl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-27tbl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-bdprx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-bdprx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-gc5gq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-885597 -- exec busybox-7dff88458-gc5gq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)
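
The shell pipeline above extracts the resolved address of host.minikube.internal (awk 'NR==5' keeps the answer line of busybox's nslookup output, cut takes its third space-separated field), which the test then pings. A Go sketch of the same two steps, assuming kubectl on PATH; the pod name is taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7dff88458-27tbl"
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", "ha-885597",
		"exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		panic(err)
	}
	// With the docker driver this is the bridge gateway, 192.168.49.1 above.
	ip := strings.TrimSpace(string(out))
	if err := exec.Command("kubectl", "--context", "ha-885597",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+ip).Run(); err != nil {
		panic(err)
	}
	fmt.Println("host reachable at", ip)
}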

TestMultiControlPlane/serial/AddWorkerNode (21.41s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-885597 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-885597 -v=7 --alsologtostderr: (20.401824668s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr: (1.005030886s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.41s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-885597 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.030067947s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (19.6s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-885597 status --output json -v=7 --alsologtostderr: (1.029573162s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp testdata/cp-test.txt ha-885597:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile905721143/001/cp-test_ha-885597.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597:/home/docker/cp-test.txt ha-885597-m02:/home/docker/cp-test_ha-885597_ha-885597-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m02 "sudo cat /home/docker/cp-test_ha-885597_ha-885597-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597:/home/docker/cp-test.txt ha-885597-m03:/home/docker/cp-test_ha-885597_ha-885597-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m03 "sudo cat /home/docker/cp-test_ha-885597_ha-885597-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597:/home/docker/cp-test.txt ha-885597-m04:/home/docker/cp-test_ha-885597_ha-885597-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m04 "sudo cat /home/docker/cp-test_ha-885597_ha-885597-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp testdata/cp-test.txt ha-885597-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile905721143/001/cp-test_ha-885597-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m02:/home/docker/cp-test.txt ha-885597:/home/docker/cp-test_ha-885597-m02_ha-885597.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597 "sudo cat /home/docker/cp-test_ha-885597-m02_ha-885597.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m02:/home/docker/cp-test.txt ha-885597-m03:/home/docker/cp-test_ha-885597-m02_ha-885597-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m03 "sudo cat /home/docker/cp-test_ha-885597-m02_ha-885597-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m02:/home/docker/cp-test.txt ha-885597-m04:/home/docker/cp-test_ha-885597-m02_ha-885597-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m04 "sudo cat /home/docker/cp-test_ha-885597-m02_ha-885597-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp testdata/cp-test.txt ha-885597-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile905721143/001/cp-test_ha-885597-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m03:/home/docker/cp-test.txt ha-885597:/home/docker/cp-test_ha-885597-m03_ha-885597.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597 "sudo cat /home/docker/cp-test_ha-885597-m03_ha-885597.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m03:/home/docker/cp-test.txt ha-885597-m02:/home/docker/cp-test_ha-885597-m03_ha-885597-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m02 "sudo cat /home/docker/cp-test_ha-885597-m03_ha-885597-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m03:/home/docker/cp-test.txt ha-885597-m04:/home/docker/cp-test_ha-885597-m03_ha-885597-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m04 "sudo cat /home/docker/cp-test_ha-885597-m03_ha-885597-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp testdata/cp-test.txt ha-885597-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile905721143/001/cp-test_ha-885597-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m04:/home/docker/cp-test.txt ha-885597:/home/docker/cp-test_ha-885597-m04_ha-885597.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597 "sudo cat /home/docker/cp-test_ha-885597-m04_ha-885597.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m04:/home/docker/cp-test.txt ha-885597-m02:/home/docker/cp-test_ha-885597-m04_ha-885597-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m02 "sudo cat /home/docker/cp-test_ha-885597-m04_ha-885597-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 cp ha-885597-m04:/home/docker/cp-test.txt ha-885597-m03:/home/docker/cp-test_ha-885597-m04_ha-885597-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 ssh -n ha-885597-m03 "sudo cat /home/docker/cp-test_ha-885597-m04_ha-885597-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.60s)
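
The block above is a full copy matrix: stage a file on each of the four nodes, then read it back over ssh to confirm it landed. The essential loop as a Go sketch, assuming minikube on PATH (this trims the pairwise node-to-node copies the test also performs):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	nodes := []string{"ha-885597", "ha-885597-m02", "ha-885597-m03", "ha-885597-m04"}
	for _, node := range nodes {
		// minikube cp <local> <node>:<path> stages the file on one node.
		if out, err := exec.Command("minikube", "-p", "ha-885597", "cp",
			"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("cp to %s: %v\n%s", node, err, out))
		}
		// minikube ssh -n <node> verifies the file is readable there.
		if out, err := exec.Command("minikube", "-p", "ha-885597", "ssh", "-n", node,
			"sudo cat /home/docker/cp-test.txt").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("read on %s: %v\n%s", node, err, out))
		}
	}
	fmt.Println("file present on all", len(nodes), "nodes")
}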

TestMultiControlPlane/serial/StopSecondaryNode (12.84s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-885597 node stop m02 -v=7 --alsologtostderr: (12.074438389s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr: exit status 7 (762.208056ms)

-- stdout --
	ha-885597
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-885597-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-885597-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-885597-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0918 20:47:37.150604  942582 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:47:37.151121  942582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:47:37.151137  942582 out.go:358] Setting ErrFile to fd 2...
	I0918 20:47:37.151143  942582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:47:37.151489  942582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	I0918 20:47:37.151727  942582 out.go:352] Setting JSON to false
	I0918 20:47:37.151758  942582 mustload.go:65] Loading cluster: ha-885597
	I0918 20:47:37.152546  942582 config.go:182] Loaded profile config "ha-885597": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 20:47:37.152576  942582 status.go:174] checking status of ha-885597 ...
	I0918 20:47:37.153498  942582 cli_runner.go:164] Run: docker container inspect ha-885597 --format={{.State.Status}}
	I0918 20:47:37.154899  942582 notify.go:220] Checking for updates...
	I0918 20:47:37.178891  942582 status.go:364] ha-885597 host status = "Running" (err=<nil>)
	I0918 20:47:37.178922  942582 host.go:66] Checking if "ha-885597" exists ...
	I0918 20:47:37.179259  942582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-885597
	I0918 20:47:37.196662  942582 host.go:66] Checking if "ha-885597" exists ...
	I0918 20:47:37.196990  942582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:47:37.197053  942582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-885597
	I0918 20:47:37.218214  942582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/ha-885597/id_rsa Username:docker}
	I0918 20:47:37.325038  942582 ssh_runner.go:195] Run: systemctl --version
	I0918 20:47:37.329918  942582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:47:37.343244  942582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:47:37.422215  942582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-18 20:47:37.412573984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:47:37.422790  942582 kubeconfig.go:125] found "ha-885597" server: "https://192.168.49.254:8443"
	I0918 20:47:37.422819  942582 api_server.go:166] Checking apiserver status ...
	I0918 20:47:37.422860  942582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:47:37.435764  942582 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup
	I0918 20:47:37.446050  942582 api_server.go:182] apiserver freezer: "5:freezer:/docker/ce4e2d3611682a15c6a47c0dd37a2934f17afe2dca14d66a51a38f69efdeb1bb/kubepods/burstable/pod822493160c1780a4733b2fdfcb902953/709410cd1cf8c86a163e7ed444f5add2218b827485406165e670ee5ea4e1649d"
	I0918 20:47:37.446145  942582 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ce4e2d3611682a15c6a47c0dd37a2934f17afe2dca14d66a51a38f69efdeb1bb/kubepods/burstable/pod822493160c1780a4733b2fdfcb902953/709410cd1cf8c86a163e7ed444f5add2218b827485406165e670ee5ea4e1649d/freezer.state
	I0918 20:47:37.455324  942582 api_server.go:204] freezer state: "THAWED"
	I0918 20:47:37.455359  942582 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0918 20:47:37.466863  942582 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0918 20:47:37.466929  942582 status.go:456] ha-885597 apiserver status = Running (err=<nil>)
	I0918 20:47:37.466973  942582 status.go:176] ha-885597 status: &{Name:ha-885597 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:47:37.467008  942582 status.go:174] checking status of ha-885597-m02 ...
	I0918 20:47:37.467357  942582 cli_runner.go:164] Run: docker container inspect ha-885597-m02 --format={{.State.Status}}
	I0918 20:47:37.483911  942582 status.go:364] ha-885597-m02 host status = "Stopped" (err=<nil>)
	I0918 20:47:37.483931  942582 status.go:377] host is not running, skipping remaining checks
	I0918 20:47:37.483938  942582 status.go:176] ha-885597-m02 status: &{Name:ha-885597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:47:37.483959  942582 status.go:174] checking status of ha-885597-m03 ...
	I0918 20:47:37.484360  942582 cli_runner.go:164] Run: docker container inspect ha-885597-m03 --format={{.State.Status}}
	I0918 20:47:37.501020  942582 status.go:364] ha-885597-m03 host status = "Running" (err=<nil>)
	I0918 20:47:37.501043  942582 host.go:66] Checking if "ha-885597-m03" exists ...
	I0918 20:47:37.501356  942582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-885597-m03
	I0918 20:47:37.518100  942582 host.go:66] Checking if "ha-885597-m03" exists ...
	I0918 20:47:37.518434  942582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:47:37.518489  942582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-885597-m03
	I0918 20:47:37.537014  942582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/ha-885597-m03/id_rsa Username:docker}
	I0918 20:47:37.633071  942582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:47:37.645737  942582 kubeconfig.go:125] found "ha-885597" server: "https://192.168.49.254:8443"
	I0918 20:47:37.645773  942582 api_server.go:166] Checking apiserver status ...
	I0918 20:47:37.645829  942582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:47:37.657397  942582 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup
	I0918 20:47:37.667019  942582 api_server.go:182] apiserver freezer: "5:freezer:/docker/4425aa4c97d8c9944c4f9949af2eb3a779a0775f49ba1a17c85c92737cb08f55/kubepods/burstable/pod8ee553b6dc8d1626eba889cb0db0ee41/c3869b98e56f8bcc6640d2d815c105b0e03961bc241d6f0c91795d4e6f4cac03"
	I0918 20:47:37.667090  942582 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4425aa4c97d8c9944c4f9949af2eb3a779a0775f49ba1a17c85c92737cb08f55/kubepods/burstable/pod8ee553b6dc8d1626eba889cb0db0ee41/c3869b98e56f8bcc6640d2d815c105b0e03961bc241d6f0c91795d4e6f4cac03/freezer.state
	I0918 20:47:37.675765  942582 api_server.go:204] freezer state: "THAWED"
	I0918 20:47:37.675802  942582 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0918 20:47:37.683756  942582 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0918 20:47:37.683791  942582 status.go:456] ha-885597-m03 apiserver status = Running (err=<nil>)
	I0918 20:47:37.683801  942582 status.go:176] ha-885597-m03 status: &{Name:ha-885597-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:47:37.683831  942582 status.go:174] checking status of ha-885597-m04 ...
	I0918 20:47:37.684357  942582 cli_runner.go:164] Run: docker container inspect ha-885597-m04 --format={{.State.Status}}
	I0918 20:47:37.703060  942582 status.go:364] ha-885597-m04 host status = "Running" (err=<nil>)
	I0918 20:47:37.703088  942582 host.go:66] Checking if "ha-885597-m04" exists ...
	I0918 20:47:37.703390  942582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-885597-m04
	I0918 20:47:37.719620  942582 host.go:66] Checking if "ha-885597-m04" exists ...
	I0918 20:47:37.719985  942582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:47:37.720043  942582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-885597-m04
	I0918 20:47:37.744557  942582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33915 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/ha-885597-m04/id_rsa Username:docker}
	I0918 20:47:37.845574  942582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:47:37.859135  942582 status.go:176] ha-885597-m04 status: &{Name:ha-885597-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.84s)
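
The stderr trace shows how status decides "apiserver: Running": it finds the kube-apiserver pid, reads the process's freezer cgroup to confirm it is THAWED rather than paused, then polls /healthz. A rough standalone Go sketch of those three steps, assuming the cgroup v1 layout seen in this log and execution on the node itself:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"regexp"
	"strings"
)

func main() {
	// Locate the apiserver process, as in the pgrep call above.
	pidOut, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(pidOut))

	// Map pid -> freezer cgroup path, then read its freezer.state.
	cg, err := os.ReadFile("/proc/" + pid + "/cgroup")
	if err != nil {
		panic(err)
	}
	m := regexp.MustCompile(`(?m)^\d+:freezer:(.*)$`).FindStringSubmatch(string(cg))
	if m == nil {
		panic("no freezer controller (cgroup v2 host?)")
	}
	state, err := os.ReadFile("/sys/fs/cgroup/freezer" + m[1] + "/freezer.state")
	if err != nil {
		panic(err)
	}
	fmt.Println("freezer state:", strings.TrimSpace(string(state))) // expect THAWED

	// Finally probe healthz on the HA virtual IP used in this run.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}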

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.3s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-885597 node start m02 -v=7 --alsologtostderr: (18.158120949s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr: (1.035800825s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.30s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (141.97s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-885597 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-885597 -v=7 --alsologtostderr
E0918 20:48:33.349729  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:48:33.356211  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:48:33.367688  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:48:33.389256  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:48:33.430686  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:48:33.512236  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:48:33.673800  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:48:33.995615  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:48:34.637763  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:48:35.919246  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-885597 -v=7 --alsologtostderr: (37.505655437s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-885597 --wait=true -v=7 --alsologtostderr
E0918 20:48:38.480701  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:48:43.602040  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:48:53.844045  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:49:14.325370  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:49:32.591434  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:49:55.287063  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-885597 --wait=true -v=7 --alsologtostderr: (1m44.32105773s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-885597
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (141.97s)
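
The interleaved cert_rotation errors reference client certificates under profiles deleted earlier in this run (addons-287708, functional-247915) and appear unrelated to this test. What the test itself pins down is an invariant, not an output: the node list before the stop/start cycle must match the list afterwards. A minimal Go sketch of that check, assuming minikube on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func nodeList() string {
	out, err := exec.Command("minikube", "node", "list", "-p", "ha-885597").Output()
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	before := nodeList()
	for _, args := range [][]string{
		{"stop", "-p", "ha-885597"},
		{"start", "-p", "ha-885597", "--wait=true"},
	} {
		if err := exec.Command("minikube", args...).Run(); err != nil {
			panic(err)
		}
	}
	if after := nodeList(); after != before {
		panic("node list changed across restart:\n" + before + "\nvs\n" + after)
	}
	fmt.Println("restart kept all nodes")
}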

TestMultiControlPlane/serial/DeleteSecondaryNode (10.62s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-885597 node delete m03 -v=7 --alsologtostderr: (9.666040497s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.62s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

TestMultiControlPlane/serial/StopCluster (36.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-885597 stop -v=7 --alsologtostderr: (35.967184357s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr: exit status 7 (110.137443ms)

-- stdout --
	ha-885597
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-885597-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-885597-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0918 20:51:08.384486  957010 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:51:08.384875  957010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:51:08.384895  957010 out.go:358] Setting ErrFile to fd 2...
	I0918 20:51:08.384900  957010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:51:08.385165  957010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	I0918 20:51:08.385383  957010 out.go:352] Setting JSON to false
	I0918 20:51:08.385416  957010 mustload.go:65] Loading cluster: ha-885597
	I0918 20:51:08.385856  957010 config.go:182] Loaded profile config "ha-885597": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 20:51:08.385884  957010 status.go:174] checking status of ha-885597 ...
	I0918 20:51:08.386432  957010 cli_runner.go:164] Run: docker container inspect ha-885597 --format={{.State.Status}}
	I0918 20:51:08.387027  957010 notify.go:220] Checking for updates...
	I0918 20:51:08.403282  957010 status.go:364] ha-885597 host status = "Stopped" (err=<nil>)
	I0918 20:51:08.403309  957010 status.go:377] host is not running, skipping remaining checks
	I0918 20:51:08.403316  957010 status.go:176] ha-885597 status: &{Name:ha-885597 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:51:08.403353  957010 status.go:174] checking status of ha-885597-m02 ...
	I0918 20:51:08.403703  957010 cli_runner.go:164] Run: docker container inspect ha-885597-m02 --format={{.State.Status}}
	I0918 20:51:08.421174  957010 status.go:364] ha-885597-m02 host status = "Stopped" (err=<nil>)
	I0918 20:51:08.421201  957010 status.go:377] host is not running, skipping remaining checks
	I0918 20:51:08.421209  957010 status.go:176] ha-885597-m02 status: &{Name:ha-885597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:51:08.421230  957010 status.go:174] checking status of ha-885597-m04 ...
	I0918 20:51:08.421548  957010 cli_runner.go:164] Run: docker container inspect ha-885597-m04 --format={{.State.Status}}
	I0918 20:51:08.442982  957010 status.go:364] ha-885597-m04 host status = "Stopped" (err=<nil>)
	I0918 20:51:08.443055  957010 status.go:377] host is not running, skipping remaining checks
	I0918 20:51:08.443076  957010 status.go:176] ha-885597-m04 status: &{Name:ha-885597-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.08s)
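
Note the exit code: status exits non-zero when components are down, and the run above returned 7 with every host stopped, so a caller can branch on the code without parsing the table. A Go sketch of reading it, assuming minikube on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-885597", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 7 in the run above corresponds to stopped host/kubelet/apiserver.
		fmt.Println("status exit code:", ee.ExitCode())
	} else if err != nil {
		panic(err) // the binary failed to launch at all
	}
}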

TestMultiControlPlane/serial/RestartCluster (77.62s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-885597 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0918 20:51:17.208788  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-885597 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.627098809s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.62s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (39.99s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-885597 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-885597 --control-plane -v=7 --alsologtostderr: (38.934352354s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-885597 status -v=7 --alsologtostderr: (1.054017859s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.99s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

TestJSONOutput/start/Command (53.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-353712 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0918 20:53:33.349591  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:01.050927  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-353712 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (53.474634271s)
--- PASS: TestJSONOutput/start/Command (53.48s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-353712 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-353712 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.79s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-353712 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-353712 --output=json --user=testUser: (5.7929282s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-519701 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-519701 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (73.664698ms)

-- stdout --
	{"specversion":"1.0","id":"a087a52a-b80a-4b04-b231-4c8348fd1988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-519701] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"76028f7c-1498-4b55-a689-42069129bb87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"f89b375c-dbe8-4e75-a3ca-004f1ff1ff90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1c6d58a1-a623-424f-a420-59ef956dd3f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig"}}
	{"specversion":"1.0","id":"3da1a446-ba60-449c-98e1-32e0c359e050","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube"}}
	{"specversion":"1.0","id":"9f562364-864f-4ca3-b41f-4653fa2b6125","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ca11c80e-f2d7-40c5-b447-4114d1a24ec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c1f1df96-0aab-4fc9-88a0-93c00f85940f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-519701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-519701
--- PASS: TestErrorJSONOutput (0.21s)
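
Each stdout line above is a CloudEvents-style JSON object with specversion, type, and a string-valued data map; the error event here carries exitcode 56 and name DRV_UNSUPPORTED_OS. A minimal Go decoder for just the fields shown, reading the stream from stdin (pipe a minikube ... --output=json invocation into it):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}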

TestKicCustomNetwork/create_custom_network (41.72s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-459705 --network=
E0918 20:54:32.591290  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-459705 --network=: (39.286325405s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-459705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-459705
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-459705: (2.404164817s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.72s)

TestKicCustomNetwork/use_default_bridge_network (34.3s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-991639 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-991639 --network=bridge: (32.260065209s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-991639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-991639
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-991639: (2.016274333s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.30s)

TestKicExistingNetwork (33.71s)

=== RUN   TestKicExistingNetwork
I0918 20:55:36.648268  879497 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0918 20:55:36.662437  879497 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0918 20:55:36.662505  879497 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0918 20:55:36.662524  879497 cli_runner.go:164] Run: docker network inspect existing-network
W0918 20:55:36.678746  879497 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0918 20:55:36.678777  879497 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0918 20:55:36.678791  879497 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0918 20:55:36.678899  879497 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0918 20:55:36.695963  879497 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c2ec3f5ec770 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:ba:b4:db} reservation:<nil>}
I0918 20:55:36.696387  879497 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ce2f30}
I0918 20:55:36.696411  879497 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0918 20:55:36.696462  879497 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0918 20:55:36.770723  879497 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-742008 --network=existing-network
E0918 20:55:55.662268  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-742008 --network=existing-network: (31.543981985s)
helpers_test.go:175: Cleaning up "existing-network-742008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-742008
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-742008: (2.010525429s)
I0918 20:56:10.342223  879497 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.71s)
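
The pre-created-network flow above can be sketched in two commands, assuming minikube is on PATH; the subnet, gateway, and MTU are the values from the log, and existing-network-demo is a placeholder profile name:

  # create the bridge network first, roughly as network_create.go does
  docker network create --driver=bridge --subnet=192.168.58.0/24 \
    --gateway=192.168.58.1 -o com.docker.network.driver.mtu=1500 existing-network
  # then attach a fresh cluster to the pre-existing network
  minikube start -p existing-network-demo --network=existing-network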

TestKicCustomSubnet (34.1s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-695066 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-695066 --subnet=192.168.60.0/24: (31.960987526s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-695066 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-695066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-695066
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-695066: (2.112420564s)
--- PASS: TestKicCustomSubnet (34.10s)
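
The assertion above reduces to one start flag and one inspect query; a sketch reusing the profile name and subnet from the log, assuming minikube is on PATH:

  minikube start -p custom-subnet-695066 --subnet=192.168.60.0/24
  # the created network should report exactly the requested subnet
  docker network inspect custom-subnet-695066 --format '{{(index .IPAM.Config 0).Subnet}}'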

TestKicStaticIP (35.46s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-638247 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-638247 --static-ip=192.168.200.200: (33.241174855s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-638247 ip
helpers_test.go:175: Cleaning up "static-ip-638247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-638247
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-638247: (2.077043928s)
--- PASS: TestKicStaticIP (35.46s)
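
An equivalent by-hand check for the static-IP flag, using the address from the log and assuming minikube is on PATH:

  minikube start -p static-ip-638247 --static-ip=192.168.200.200
  # should print 192.168.200.200
  minikube -p static-ip-638247 ip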

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (66.43s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-485708 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-485708 --driver=docker  --container-runtime=containerd: (31.170421276s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-488300 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-488300 --driver=docker  --container-runtime=containerd: (29.65826701s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-485708
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-488300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-488300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-488300
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-488300: (2.016716543s)
helpers_test.go:175: Cleaning up "first-485708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-485708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-485708: (2.179636877s)
--- PASS: TestMinikubeProfile (66.43s)
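
The profile juggling above boils down to two subcommands; a sketch using the profile names from the log:

  # machine-readable listing, as the test queries twice
  minikube profile list -o json
  # switch the active profile before listing again
  minikube profile first-485708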

TestMountStart/serial/StartWithMountFirst (6.37s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-946173 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-946173 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.370081331s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.37s)
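
A sketch of the same mount-enabled start, assuming minikube is on PATH and using mnt-demo as a placeholder profile; the mount flags mirror the log:

  minikube start -p mnt-demo --memory=2048 --no-kubernetes \
    --mount --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464
  # the host directory should be visible inside the node
  minikube -p mnt-demo ssh -- ls /minikube-host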

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-946173 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (6.43s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-948577 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0918 20:58:33.349820  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-948577 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.429776048s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.43s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-948577 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-946173 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-946173 --alsologtostderr -v=5: (1.632536032s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-948577 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-948577
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-948577: (1.210273713s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.94s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-948577
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-948577: (6.938272508s)
--- PASS: TestMountStart/serial/RestartStopped (7.94s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-948577 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (64.99s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812408 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0918 20:59:32.591247  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812408 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.487221218s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.99s)
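
The two-node bring-up above in sketch form, with multi-demo as a placeholder profile name:

  minikube start -p multi-demo --nodes=2 --memory=2200 --wait=true \
    --driver=docker --container-runtime=containerd
  # expect one control plane and one worker, all components Running
  minikube -p multi-demo status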

TestMultiNode/serial/DeployApp2Nodes (18.94s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-812408 -- rollout status deployment/busybox: (17.157107004s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- exec busybox-7dff88458-fbjgt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- exec busybox-7dff88458-hxchh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- exec busybox-7dff88458-fbjgt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- exec busybox-7dff88458-hxchh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- exec busybox-7dff88458-fbjgt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- exec busybox-7dff88458-hxchh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.94s)
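
The DNS check above, condensed; <pod-name> stands in for either busybox replica, since the test expects the replicas spread across both nodes:

  kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
  kubectl rollout status deployment/busybox
  # each lookup must succeed from every replica, i.e. from both nodes
  kubectl exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local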

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- exec busybox-7dff88458-fbjgt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- exec busybox-7dff88458-fbjgt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- exec busybox-7dff88458-hxchh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-812408 -- exec busybox-7dff88458-hxchh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)

TestMultiNode/serial/AddNode (16.14s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-812408 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-812408 -v 3 --alsologtostderr: (15.43914072s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.14s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-812408 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp testdata/cp-test.txt multinode-812408:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp multinode-812408:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1164899749/001/cp-test_multinode-812408.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp multinode-812408:/home/docker/cp-test.txt multinode-812408-m02:/home/docker/cp-test_multinode-812408_multinode-812408-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m02 "sudo cat /home/docker/cp-test_multinode-812408_multinode-812408-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp multinode-812408:/home/docker/cp-test.txt multinode-812408-m03:/home/docker/cp-test_multinode-812408_multinode-812408-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m03 "sudo cat /home/docker/cp-test_multinode-812408_multinode-812408-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp testdata/cp-test.txt multinode-812408-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp multinode-812408-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1164899749/001/cp-test_multinode-812408-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp multinode-812408-m02:/home/docker/cp-test.txt multinode-812408:/home/docker/cp-test_multinode-812408-m02_multinode-812408.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408 "sudo cat /home/docker/cp-test_multinode-812408-m02_multinode-812408.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp multinode-812408-m02:/home/docker/cp-test.txt multinode-812408-m03:/home/docker/cp-test_multinode-812408-m02_multinode-812408-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m03 "sudo cat /home/docker/cp-test_multinode-812408-m02_multinode-812408-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp testdata/cp-test.txt multinode-812408-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp multinode-812408-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1164899749/001/cp-test_multinode-812408-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp multinode-812408-m03:/home/docker/cp-test.txt multinode-812408:/home/docker/cp-test_multinode-812408-m03_multinode-812408.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408 "sudo cat /home/docker/cp-test_multinode-812408-m03_multinode-812408.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 cp multinode-812408-m03:/home/docker/cp-test.txt multinode-812408-m02:/home/docker/cp-test_multinode-812408-m03_multinode-812408-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 ssh -n multinode-812408-m02 "sudo cat /home/docker/cp-test_multinode-812408-m03_multinode-812408-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.28s)
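
The copy matrix above exercises three forms of minikube cp; /tmp/out.txt is a placeholder destination:

  # host -> node
  minikube -p multinode-812408 cp testdata/cp-test.txt multinode-812408:/home/docker/cp-test.txt
  # node -> host
  minikube -p multinode-812408 cp multinode-812408:/home/docker/cp-test.txt /tmp/out.txt
  # node -> node
  minikube -p multinode-812408 cp multinode-812408:/home/docker/cp-test.txt multinode-812408-m02:/home/docker/cp-test.txt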

TestMultiNode/serial/StopNode (2.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-812408 node stop m03: (1.377265112s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812408 status: exit status 7 (533.245029ms)

-- stdout --
	multinode-812408
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-812408-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-812408-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812408 status --alsologtostderr: exit status 7 (536.174155ms)

-- stdout --
	multinode-812408
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-812408-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-812408-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0918 21:00:46.938684 1010498 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:00:46.938811 1010498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:46.938821 1010498 out.go:358] Setting ErrFile to fd 2...
	I0918 21:00:46.938834 1010498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:46.939189 1010498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	I0918 21:00:46.939419 1010498 out.go:352] Setting JSON to false
	I0918 21:00:46.939451 1010498 mustload.go:65] Loading cluster: multinode-812408
	I0918 21:00:46.940178 1010498 config.go:182] Loaded profile config "multinode-812408": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 21:00:46.940203 1010498 status.go:174] checking status of multinode-812408 ...
	I0918 21:00:46.940999 1010498 cli_runner.go:164] Run: docker container inspect multinode-812408 --format={{.State.Status}}
	I0918 21:00:46.941761 1010498 notify.go:220] Checking for updates...
	I0918 21:00:46.963042 1010498 status.go:364] multinode-812408 host status = "Running" (err=<nil>)
	I0918 21:00:46.963072 1010498 host.go:66] Checking if "multinode-812408" exists ...
	I0918 21:00:46.963378 1010498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-812408
	I0918 21:00:46.989661 1010498 host.go:66] Checking if "multinode-812408" exists ...
	I0918 21:00:46.990061 1010498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 21:00:46.990118 1010498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-812408
	I0918 21:00:47.017401 1010498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/multinode-812408/id_rsa Username:docker}
	I0918 21:00:47.117848 1010498 ssh_runner.go:195] Run: systemctl --version
	I0918 21:00:47.122851 1010498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:00:47.134940 1010498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 21:00:47.191050 1010498 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-18 21:00:47.181287755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 21:00:47.191751 1010498 kubeconfig.go:125] found "multinode-812408" server: "https://192.168.67.2:8443"
	I0918 21:00:47.191793 1010498 api_server.go:166] Checking apiserver status ...
	I0918 21:00:47.191847 1010498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:00:47.204153 1010498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup
	I0918 21:00:47.214135 1010498 api_server.go:182] apiserver freezer: "5:freezer:/docker/27488c22d60cfde3d0b339319a6ee6f86d43017cb24c288fc1d94a6feaf188d9/kubepods/burstable/pod2d2454c794ca875433f413c041b59f43/bb9481a28d2d31ddd8f253fb90d9b9968bfea98fcbc11f3e43a9454aa9dd682f"
	I0918 21:00:47.214270 1010498 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/27488c22d60cfde3d0b339319a6ee6f86d43017cb24c288fc1d94a6feaf188d9/kubepods/burstable/pod2d2454c794ca875433f413c041b59f43/bb9481a28d2d31ddd8f253fb90d9b9968bfea98fcbc11f3e43a9454aa9dd682f/freezer.state
	I0918 21:00:47.223380 1010498 api_server.go:204] freezer state: "THAWED"
	I0918 21:00:47.223409 1010498 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0918 21:00:47.231045 1010498 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0918 21:00:47.231081 1010498 status.go:456] multinode-812408 apiserver status = Running (err=<nil>)
	I0918 21:00:47.231094 1010498 status.go:176] multinode-812408 status: &{Name:multinode-812408 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 21:00:47.231135 1010498 status.go:174] checking status of multinode-812408-m02 ...
	I0918 21:00:47.231472 1010498 cli_runner.go:164] Run: docker container inspect multinode-812408-m02 --format={{.State.Status}}
	I0918 21:00:47.248678 1010498 status.go:364] multinode-812408-m02 host status = "Running" (err=<nil>)
	I0918 21:00:47.248705 1010498 host.go:66] Checking if "multinode-812408-m02" exists ...
	I0918 21:00:47.249020 1010498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-812408-m02
	I0918 21:00:47.265300 1010498 host.go:66] Checking if "multinode-812408-m02" exists ...
	I0918 21:00:47.265652 1010498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 21:00:47.265698 1010498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-812408-m02
	I0918 21:00:47.282448 1010498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/19667-874114/.minikube/machines/multinode-812408-m02/id_rsa Username:docker}
	I0918 21:00:47.381150 1010498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:00:47.398317 1010498 status.go:176] multinode-812408-m02 status: &{Name:multinode-812408-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0918 21:00:47.398356 1010498 status.go:174] checking status of multinode-812408-m03 ...
	I0918 21:00:47.398672 1010498 cli_runner.go:164] Run: docker container inspect multinode-812408-m03 --format={{.State.Status}}
	I0918 21:00:47.417932 1010498 status.go:364] multinode-812408-m03 host status = "Stopped" (err=<nil>)
	I0918 21:00:47.417956 1010498 status.go:377] host is not running, skipping remaining checks
	I0918 21:00:47.417963 1010498 status.go:176] multinode-812408-m03 status: &{Name:multinode-812408-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
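
As the Non-zero exit lines show, status deliberately exits 7 while any node is down; in sketch form:

  minikube -p multinode-812408 node stop m03
  minikube -p multinode-812408 status
  echo $?   # 7 until the node is started again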

TestMultiNode/serial/StartAfterStop (9.86s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-812408 node start m03 -v=7 --alsologtostderr: (9.103166461s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.86s)

TestMultiNode/serial/RestartKeepsNodes (88.11s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-812408
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-812408
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-812408: (25.341714741s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812408 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812408 --wait=true -v=8 --alsologtostderr: (1m2.658025465s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-812408
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.11s)

TestMultiNode/serial/DeleteNode (5.53s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-812408 node delete m03: (4.794873709s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.53s)
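
Node add/remove in sketch form, matching the commands above; the kubectl check assumes the kubeconfig context already points at the cluster:

  minikube -p multinode-812408 node add         # new worker gets the next mNN suffix, as seen above
  minikube -p multinode-812408 node delete m03
  kubectl get nodes                             # the deleted node should be gone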

TestMultiNode/serial/StopMultiNode (24.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-812408 stop: (23.831878936s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812408 status: exit status 7 (91.513057ms)

-- stdout --
	multinode-812408
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-812408-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-812408 status --alsologtostderr: exit status 7 (88.954621ms)

-- stdout --
	multinode-812408
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-812408-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0918 21:02:54.901664 1018938 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:02:54.901841 1018938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:02:54.901850 1018938 out.go:358] Setting ErrFile to fd 2...
	I0918 21:02:54.901856 1018938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:02:54.902129 1018938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	I0918 21:02:54.902367 1018938 out.go:352] Setting JSON to false
	I0918 21:02:54.902409 1018938 mustload.go:65] Loading cluster: multinode-812408
	I0918 21:02:54.902509 1018938 notify.go:220] Checking for updates...
	I0918 21:02:54.902842 1018938 config.go:182] Loaded profile config "multinode-812408": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 21:02:54.902864 1018938 status.go:174] checking status of multinode-812408 ...
	I0918 21:02:54.903733 1018938 cli_runner.go:164] Run: docker container inspect multinode-812408 --format={{.State.Status}}
	I0918 21:02:54.920983 1018938 status.go:364] multinode-812408 host status = "Stopped" (err=<nil>)
	I0918 21:02:54.921008 1018938 status.go:377] host is not running, skipping remaining checks
	I0918 21:02:54.921015 1018938 status.go:176] multinode-812408 status: &{Name:multinode-812408 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 21:02:54.921055 1018938 status.go:174] checking status of multinode-812408-m02 ...
	I0918 21:02:54.921379 1018938 cli_runner.go:164] Run: docker container inspect multinode-812408-m02 --format={{.State.Status}}
	I0918 21:02:54.944210 1018938 status.go:364] multinode-812408-m02 host status = "Stopped" (err=<nil>)
	I0918 21:02:54.944239 1018938 status.go:377] host is not running, skipping remaining checks
	I0918 21:02:54.944249 1018938 status.go:176] multinode-812408-m02 status: &{Name:multinode-812408-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)

TestMultiNode/serial/RestartMultiNode (56.05s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812408 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0918 21:03:33.348992  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812408 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (55.390346333s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-812408 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.05s)

TestMultiNode/serial/ValidateNameConflict (34.05s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-812408
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812408-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-812408-m02 --driver=docker  --container-runtime=containerd: exit status 14 (82.412092ms)

-- stdout --
	* [multinode-812408-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-812408-m02' is duplicated with machine name 'multinode-812408-m02' in profile 'multinode-812408'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-812408-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-812408-m03 --driver=docker  --container-runtime=containerd: (31.620657389s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-812408
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-812408: exit status 80 (316.599568ms)

-- stdout --
	* Adding node m03 to cluster multinode-812408 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-812408-m03 already exists in multinode-812408-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-812408-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-812408-m03: (1.965193411s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.05s)
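
Both failure modes above are usage errors with distinct exit codes; a sketch of the first one:

  # profile name collides with a machine name inside an existing profile
  minikube start -p multinode-812408-m02 --driver=docker --container-runtime=containerd
  echo $?   # 14 (MK_USAGE), as in the log above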

TestPreload (123.32s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-829209 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0918 21:04:32.591434  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:04:56.412847  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-829209 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m12.939829276s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-829209 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-829209 image pull gcr.io/k8s-minikube/busybox: (1.951473545s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-829209
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-829209: (12.205030779s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-829209 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-829209 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (33.501415519s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-829209 image list
helpers_test.go:175: Cleaning up "test-preload-829209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-829209
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-829209: (2.445091677s)
--- PASS: TestPreload (123.32s)
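
The preload round-trip above in sketch form, with preload-demo as a placeholder profile name:

  # start without preloaded images on an older Kubernetes
  minikube start -p preload-demo --memory=2200 --preload=false \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
  minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
  minikube stop -p preload-demo
  # restart; images pulled before the stop must survive it
  minikube start -p preload-demo --memory=2200 --driver=docker --container-runtime=containerd
  minikube -p preload-demo image list   # busybox should still be listed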

TestScheduledStopUnix (106.36s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-751183 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-751183 --memory=2048 --driver=docker  --container-runtime=containerd: (30.384784824s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-751183 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-751183 -n scheduled-stop-751183
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-751183 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0918 21:07:03.226579  879497 retry.go:31] will retry after 101.179µs: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.227065  879497 retry.go:31] will retry after 220.776µs: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.228190  879497 retry.go:31] will retry after 331.861µs: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.229320  879497 retry.go:31] will retry after 181.685µs: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.230468  879497 retry.go:31] will retry after 589.479µs: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.231602  879497 retry.go:31] will retry after 771.16µs: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.232710  879497 retry.go:31] will retry after 1.454169ms: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.234975  879497 retry.go:31] will retry after 1.306016ms: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.237557  879497 retry.go:31] will retry after 1.838112ms: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.239979  879497 retry.go:31] will retry after 5.739346ms: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.246210  879497 retry.go:31] will retry after 5.126088ms: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.252567  879497 retry.go:31] will retry after 6.623007ms: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.259807  879497 retry.go:31] will retry after 12.359598ms: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.273054  879497 retry.go:31] will retry after 24.700542ms: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
I0918 21:07:03.298358  879497 retry.go:31] will retry after 41.966221ms: open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/scheduled-stop-751183/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-751183 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-751183 -n scheduled-stop-751183
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-751183
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-751183 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-751183
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-751183: exit status 7 (69.140028ms)

-- stdout --
	scheduled-stop-751183
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-751183 -n scheduled-stop-751183
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-751183 -n scheduled-stop-751183: exit status 7 (67.35259ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-751183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-751183
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-751183: (4.414885681s)
--- PASS: TestScheduledStopUnix (106.36s)
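
Scheduled stop in sketch form, with sched-demo as a placeholder profile; the flags are the ones exercised above:

  minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
  minikube status -p sched-demo --format '{{.TimeToStop}}'
  minikube stop -p sched-demo --cancel-scheduled   # disarm it
  minikube stop -p sched-demo --schedule 15s       # re-arm; the host stops shortly after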

TestInsufficientStorage (10.29s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-898399 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-898399 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.823513627s)

-- stdout --
	{"specversion":"1.0","id":"cf4c84ec-54b4-4498-a5b9-28314b2d3161","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-898399] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"de8c83df-4964-46a3-bdd4-5c97510526cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"f04a8155-8e7b-43c4-92fc-b628debf9c2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"19c9144c-36d5-445f-a04f-cafbd438db3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig"}}
	{"specversion":"1.0","id":"20e61651-19d7-467a-9b9f-cad94ab24fdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube"}}
	{"specversion":"1.0","id":"4555e4f9-a7c6-4972-b589-2234576ed501","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7602c48e-7913-4260-94dc-f653eacbefd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3d81230b-612c-4cf9-99d6-5d82097348bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f5bd4f8a-bda6-49b0-9898-07df5846b444","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e18b9320-fd4f-4cee-8efa-77eb7bb4f121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"13ec725e-9200-48de-9bae-472fc16f6079","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"54dc92cd-04ba-4579-8c31-165ca0767dc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-898399\" primary control-plane node in \"insufficient-storage-898399\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"df46eb30-e9bc-4234-94ef-2c715d6c3f7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f7b2aa4-255d-41a6-9431-f6419b14a747","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3aadedb1-d2e7-4a4d-87ca-25030dad52f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-898399 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-898399 --output=json --layout=cluster: exit status 7 (286.842746ms)

-- stdout --
	{"Name":"insufficient-storage-898399","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-898399","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0918 21:08:26.764850 1037601 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-898399" does not appear in /home/jenkins/minikube-integration/19667-874114/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-898399 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-898399 --output=json --layout=cluster: exit status 7 (299.203091ms)

-- stdout --
	{"Name":"insufficient-storage-898399","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-898399","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0918 21:08:27.066075 1037664 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-898399" does not appear in /home/jenkins/minikube-integration/19667-874114/kubeconfig
	E0918 21:08:27.077683 1037664 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/insufficient-storage-898399/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-898399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-898399
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-898399: (1.882421848s)
--- PASS: TestInsufficientStorage (10.29s)
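The stdout above is a stream of line-delimited CloudEvents, which is what makes this failure machine-checkable: a consumer can watch for an io.k8s.sigs.minikube.error event (here name=RSRC_DOCKER_STORAGE, exitcode=26) instead of scraping text. A hedged Go sketch of such a consumer; the field names mirror the events shown, and the command line is just the one from this run.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "insufficient-storage-898399", "--memory=2048", "--output=json")
	stdout, _ := cmd.StdoutPipe()
	_ = cmd.Start()
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // single events can be long
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // ignore non-JSON noise on stdout
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["name"])
		case "io.k8s.sigs.minikube.error":
			// e.g. name=RSRC_DOCKER_STORAGE, exitcode=26 in the run above
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
	_ = cmd.Wait()
}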

TestRunningBinaryUpgrade (84.79s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3887408079 start -p running-upgrade-714026 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3887408079 start -p running-upgrade-714026 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (34.86194193s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-714026 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-714026 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.553851259s)
helpers_test.go:175: Cleaning up "running-upgrade-714026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-714026
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-714026: (3.305956433s)
--- PASS: TestRunningBinaryUpgrade (84.79s)

TestKubernetesUpgrade (374.81s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-185847 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-185847 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m18.737805094s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-185847
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-185847: (1.365755084s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-185847 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-185847 status --format={{.Host}}: exit status 7 (260.478398ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-185847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-185847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m41.585127065s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-185847 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-185847 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-185847 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (126.991115ms)

-- stdout --
	* [kubernetes-upgrade-185847] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-185847
	    minikube start -p kubernetes-upgrade-185847 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1858472 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-185847 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-185847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-185847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.04244158s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-185847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-185847
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-185847: (3.498486829s)
--- PASS: TestKubernetesUpgrade (374.81s)
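The downgrade attempt above fails in about 0.1s with exit status 106 because the version gate runs before any cluster mutation: the requested v1.20.0 is semantically older than the profile's recorded v1.31.1. A stand-in sketch of such a gate using golang.org/x/mod/semver; this is not minikube's actual implementation.

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersionChange refuses any request that would move an existing
// cluster to an older Kubernetes version.
func checkVersionChange(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("K8S_DOWNGRADE_UNSUPPORTED: unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	if err := checkVersionChange("v1.31.1", "v1.20.0"); err != nil {
		fmt.Println("X Exiting due to", err) // the test expects this refusal
	}
	fmt.Println(checkVersionChange("v1.20.0", "v1.31.1")) // <nil>: upgrades are allowed
}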

TestMissingContainerUpgrade (183.78s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3020432909 start -p missing-upgrade-479573 --memory=2200 --driver=docker  --container-runtime=containerd
E0918 21:08:33.349835  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3020432909 start -p missing-upgrade-479573 --memory=2200 --driver=docker  --container-runtime=containerd: (1m37.189288733s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-479573
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-479573: (10.273355307s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-479573
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-479573 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-479573 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m12.832206365s)
helpers_test.go:175: Cleaning up "missing-upgrade-479573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-479573
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-479573: (2.440653406s)
--- PASS: TestMissingContainerUpgrade (183.78s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-927119 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-927119 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (76.645349ms)

-- stdout --
	* [NoKubernetes-927119] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
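This subtest passes in 0.08s because it never reaches the driver: the conflicting flags are rejected up front with exit status 14 (usage error). A minimal sketch of that kind of mutual-exclusion guard; the flag names match the CLI shown above, everything else is illustrative.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	// Reject the contradictory combination before doing any real work.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok")
}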

TestNoKubernetes/serial/StartWithK8s (38.16s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-927119 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-927119 --driver=docker  --container-runtime=containerd: (37.717898584s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-927119 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.16s)

TestNoKubernetes/serial/StartWithStopK8s (20.59s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-927119 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-927119 --no-kubernetes --driver=docker  --container-runtime=containerd: (18.240787289s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-927119 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-927119 status -o json: exit status 2 (361.946602ms)

-- stdout --
	{"Name":"NoKubernetes-927119","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-927119
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-927119: (1.984099051s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.59s)

TestNoKubernetes/serial/Start (8.16s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-927119 --no-kubernetes --driver=docker  --container-runtime=containerd
E0918 21:09:32.591786  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-927119 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.158381354s)
--- PASS: TestNoKubernetes/serial/Start (8.16s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-927119 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-927119 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.09223ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
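This check passes precisely because the ssh command fails: systemctl is-active exits non-zero (3 for an inactive unit) and minikube ssh propagates that code, so the assertion is on the exit status rather than on output. A sketch of the same idea; the binary path and profile name are taken from the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func kubeletRunning(profile string) (bool, error) {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil // is-active exited 0: kubelet unit is active
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit: unit inactive, as expected here
	}
	return false, err // ssh itself could not run
}

func main() {
	running, err := kubeletRunning("NoKubernetes-927119")
	fmt.Println("kubelet running:", running, "err:", err)
}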

TestNoKubernetes/serial/ProfileList (0.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.99s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-927119
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-927119: (1.20967625s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (7.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-927119 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-927119 --driver=docker  --container-runtime=containerd: (7.374304539s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.37s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-927119 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-927119 "sudo systemctl is-active --quiet service kubelet": exit status 1 (317.692444ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestStoppedBinaryUpgrade/Setup (1.05s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.05s)

TestStoppedBinaryUpgrade/Upgrade (180.52s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.977669193 start -p stopped-upgrade-046092 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.977669193 start -p stopped-upgrade-046092 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.478883365s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.977669193 -p stopped-upgrade-046092 stop
E0918 21:12:35.664262  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.977669193 -p stopped-upgrade-046092 stop: (19.980600957s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-046092 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0918 21:13:33.349049  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:14:32.591482  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-046092 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m52.057390889s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (180.52s)
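The Upgrade subtest is three sequential CLI invocations against one profile: the archived v1.26.0 binary starts the cluster, stops it, and the binary under test restarts it in place. A sketch of that choreography under stated assumptions; the oldBin path is a placeholder for the temp copy of the release the test downloads.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const profile = "stopped-upgrade-046092"
	oldBin := "/tmp/minikube-v1.26.0" // placeholder for the downloaded release binary
	steps := [][]string{
		{oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=docker", "--container-runtime=containerd"},
		{oldBin, "-p", profile, "stop"},
		{"out/minikube-linux-arm64", "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=containerd"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Println("upgrade step failed:", err)
			return
		}
	}
	fmt.Println("stopped-binary upgrade sequence completed")
}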

TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-046092
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-046092: (1.320095195s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

TestPause/serial/Start (60.57s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-979291 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-979291 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m0.574208155s)
--- PASS: TestPause/serial/Start (60.57s)

TestNetworkPlugins/group/false (3.62s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-808324 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-808324 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (183.029009ms)

-- stdout --
	* [false-808324] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0918 21:16:54.150899 1078072 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:16:54.151116 1078072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:16:54.151129 1078072 out.go:358] Setting ErrFile to fd 2...
	I0918 21:16:54.151135 1078072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:16:54.151430 1078072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-874114/.minikube/bin
	I0918 21:16:54.151917 1078072 out.go:352] Setting JSON to false
	I0918 21:16:54.153293 1078072 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17963,"bootTime":1726676252,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0918 21:16:54.153367 1078072 start.go:139] virtualization:  
	I0918 21:16:54.157289 1078072 out.go:177] * [false-808324] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0918 21:16:54.159211 1078072 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:16:54.159312 1078072 notify.go:220] Checking for updates...
	I0918 21:16:54.163334 1078072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:16:54.165272 1078072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-874114/kubeconfig
	I0918 21:16:54.167476 1078072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-874114/.minikube
	I0918 21:16:54.169764 1078072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 21:16:54.171989 1078072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:16:54.174756 1078072 config.go:182] Loaded profile config "pause-979291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0918 21:16:54.174849 1078072 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:16:54.200829 1078072 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 21:16:54.200982 1078072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 21:16:54.272430 1078072 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-18 21:16:54.25741989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 21:16:54.272538 1078072 docker.go:318] overlay module found
	I0918 21:16:54.274932 1078072 out.go:177] * Using the docker driver based on user configuration
	I0918 21:16:54.277052 1078072 start.go:297] selected driver: docker
	I0918 21:16:54.277075 1078072 start.go:901] validating driver "docker" against <nil>
	I0918 21:16:54.277089 1078072 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:16:54.279743 1078072 out.go:201] 
	W0918 21:16:54.281715 1078072 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0918 21:16:54.283801 1078072 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-808324 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-808324

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-808324

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-808324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-808324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-808324

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-808324

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-808324

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-808324

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-808324

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-808324

>>> host: /etc/nsswitch.conf:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: /etc/hosts:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: /etc/resolv.conf:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-808324

>>> host: crictl pods:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: crictl containers:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> k8s: describe netcat deployment:
error: context "false-808324" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-808324" does not exist

>>> k8s: netcat logs:
error: context "false-808324" does not exist

>>> k8s: describe coredns deployment:
error: context "false-808324" does not exist

>>> k8s: describe coredns pods:
error: context "false-808324" does not exist

>>> k8s: coredns logs:
error: context "false-808324" does not exist

>>> k8s: describe api server pod(s):
error: context "false-808324" does not exist

>>> k8s: api server logs:
error: context "false-808324" does not exist

>>> host: /etc/cni:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: ip a s:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: ip r s:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: iptables-save:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: iptables table nat:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> k8s: describe kube-proxy daemon set:
error: context "false-808324" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-808324" does not exist

>>> k8s: kube-proxy logs:
error: context "false-808324" does not exist

>>> host: kubelet daemon status:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: kubelet daemon config:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> k8s: kubelet logs:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19667-874114/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 21:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-979291
contexts:
- context:
    cluster: pause-979291
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 21:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-979291
  name: pause-979291
current-context: pause-979291
kind: Config
preferences: {}
users:
- name: pause-979291
  user:
    client-certificate: /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/pause-979291/client.crt
    client-key: /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/pause-979291/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-808324

>>> host: docker daemon status:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: docker daemon config:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: /etc/docker/daemon.json:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: docker system info:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: cri-docker daemon status:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: cri-docker daemon config:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: cri-dockerd version:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: containerd daemon status:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: containerd daemon config:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: /etc/containerd/config.toml:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: containerd config dump:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: crio daemon status:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: crio daemon config:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: /etc/crio:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"

>>> host: crio config:
* Profile "false-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-808324"
----------------------- debugLogs end: false-808324 [took: 3.286340104s] --------------------------------
helpers_test.go:175: Cleaning up "false-808324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-808324
--- PASS: TestNetworkPlugins/group/false (3.62s)

TestPause/serial/SecondStartNoReconfiguration (7.41s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-979291 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-979291 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.397848564s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.41s)

TestPause/serial/Pause (0.76s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-979291 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-979291 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-979291 --output=json --layout=cluster: exit status 2 (302.849505ms)

-- stdout --
	{"Name":"pause-979291","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-979291","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
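Note on the status JSON above: minikube reports component health with HTTP-style codes (200 OK, 405 Stopped, 418 Paused), so the non-zero exit is expected for a paused cluster rather than a failure. A minimal sketch for pulling out per-component state, assuming jq is available (jq is not part of this test suite):

	$ out/minikube-linux-arm64 status -p pause-979291 --output=json --layout=cluster \
	    | jq '.Nodes[].Components | map_values(.StatusName)'
	{"apiserver":"Paused","kubelet":"Stopped"}    # illustrative output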

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-979291 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-979291 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-979291 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-979291 --alsologtostderr -v=5: (2.807218811s)
--- PASS: TestPause/serial/DeletePaused (2.81s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-979291
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-979291: exit status 1 (25.204125ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-979291: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.57s)
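The cleanup checks above can be reproduced by hand with stock Docker CLI filters (a sketch, not part of the test helpers):

	$ docker ps -a --filter name=pause-979291 --format '{{.Names}}'       # expect empty output
	$ docker volume inspect pause-979291 >/dev/null 2>&1 || echo gone     # expect "gone"
	$ docker network ls --filter name=pause-979291 --format '{{.Name}}'   # expect empty output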

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-025914 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0918 21:18:33.349385  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:19:32.590903  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-025914 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m35.024195049s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (155.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-025914 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d7eac0a6-275f-4a42-afa1-1bbffb717220] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d7eac0a6-275f-4a42-afa1-1bbffb717220] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003618001s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-025914 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.87s)
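The final `ulimit -n` exec doubles as a smoke test that kubectl exec works against the v1.20.0 control plane. Outside the Go helpers, the readiness wait could be approximated with kubectl directly (a sketch; helpers_test.go polls the pod list rather than using kubectl wait):

	$ kubectl --context old-k8s-version-025914 wait pod \
	    -l integration-test=busybox --for=condition=Ready --timeout=8m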

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-460226 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-460226 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m0.617698566s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-025914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-025914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.40670111s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-025914 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-025914 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-025914 --alsologtostderr -v=3: (14.63347076s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-025914 -n old-k8s-version-025914
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-025914 -n old-k8s-version-025914: exit status 7 (100.315436ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-025914 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
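On the "(may be ok)" note: minikube's status exit code is a bitmask over host, control plane, and Kubernetes state, so exit status 7 (= 1|2|4) indicates all three are stopped, which is the expected state after `minikube stop`. This decoding is an assumption based on minikube's status implementation and worth verifying against your version; a sketch:

	$ out/minikube-linux-arm64 status -p old-k8s-version-025914 >/dev/null; rc=$?
	$ (( rc & 1 )) && echo "host stopped"
	$ (( rc & 2 )) && echo "control plane stopped"
	$ (( rc & 4 )) && echo "kubernetes stopped"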

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-460226 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7ebfbd70-6925-461a-82ef-8966eb048985] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7ebfbd70-6925-461a-82ef-8966eb048985] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004542304s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-460226 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.46s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-460226 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-460226 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.57144557s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-460226 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.74s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-460226 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-460226 --alsologtostderr -v=3: (12.378521993s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-460226 -n no-preload-460226
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-460226 -n no-preload-460226: exit status 7 (67.667638ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-460226 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-460226 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0918 21:23:33.349201  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:24:32.591541  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-460226 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m48.522454366s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-460226 -n no-preload-460226
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (288.87s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lwcrm" [6e8d2dfc-8ed8-47f2-8a5f-e9bc0fbc52cc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003689437s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lwcrm" [6e8d2dfc-8ed8-47f2-8a5f-e9bc0fbc52cc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004277957s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-460226 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-460226 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
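To cross-check the image list outside the test, the same JSON can be post-processed with jq (a sketch; it assumes the schema exposes repoTags, which matches recent minikube releases but is worth verifying against your version):

	$ out/minikube-linux-arm64 -p no-preload-460226 image list --format=json \
	    | jq -r '.[].repoTags[]'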

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-460226 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-460226 -n no-preload-460226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-460226 -n no-preload-460226: exit status 2 (331.130999ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-460226 -n no-preload-460226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-460226 -n no-preload-460226: exit status 2 (317.422261ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-460226 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-460226 -n no-preload-460226
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-460226 -n no-preload-460226
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-778578 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-778578 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m36.113903361s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (96.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dknpf" [20b176b3-24ff-4d3f-b867-00372394e642] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004683745s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dknpf" [20b176b3-24ff-4d3f-b867-00372394e642] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007255484s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-025914 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-025914 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-025914 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-025914 --alsologtostderr -v=1: (1.032718058s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-025914 -n old-k8s-version-025914
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-025914 -n old-k8s-version-025914: exit status 2 (560.430489ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-025914 -n old-k8s-version-025914
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-025914 -n old-k8s-version-025914: exit status 2 (398.765517ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-025914 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-025914 -n old-k8s-version-025914
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-025914 -n old-k8s-version-025914
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-034205 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0918 21:28:33.349501  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-034205 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (55.094014248s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.09s)
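Since this profile moves the API server to port 8444, the kubeconfig entry is the quickest confirmation that the flag took effect (a sketch; the address shown is illustrative):

	$ kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-034205")].cluster.server}'
	https://192.168.76.2:8444    # illustrative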

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-034205 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2569ab93-3f24-4e9d-9623-75ad1011bc7a] Pending
helpers_test.go:344: "busybox" [2569ab93-3f24-4e9d-9623-75ad1011bc7a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2569ab93-3f24-4e9d-9623-75ad1011bc7a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004983457s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-034205 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-034205 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-034205 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-034205 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-034205 --alsologtostderr -v=3: (12.067870974s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.07s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-778578 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1cab2e02-f555-49b1-9d2f-334e7eba374e] Pending
helpers_test.go:344: "busybox" [1cab2e02-f555-49b1-9d2f-334e7eba374e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0918 21:29:15.665682  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [1cab2e02-f555-49b1-9d2f-334e7eba374e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004563211s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-778578 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-034205 -n default-k8s-diff-port-034205
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-034205 -n default-k8s-diff-port-034205: exit status 7 (74.471991ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-034205 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-034205 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-034205 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.593713996s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-034205 -n default-k8s-diff-port-034205
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-778578 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-778578 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.353404236s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-778578 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.52s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-778578 --alsologtostderr -v=3
E0918 21:29:32.591785  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-778578 --alsologtostderr -v=3: (12.74827802s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.75s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-778578 -n embed-certs-778578
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-778578 -n embed-certs-778578: exit status 7 (90.874726ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-778578 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-778578 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0918 21:30:58.585112  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:30:58.591608  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:30:58.602986  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:30:58.624464  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:30:58.665840  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:30:58.747412  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:30:58.908976  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:30:59.230383  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:30:59.872551  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:31:01.154209  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:31:03.716515  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:31:08.838816  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:31:19.080561  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:31:39.562600  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:08.291718  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:08.298101  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:08.309563  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:08.331008  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:08.372497  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:08.454365  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:08.616063  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:08.937875  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:09.580140  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:10.862142  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:13.423657  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:18.545033  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:20.524382  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:28.787078  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:32:49.268461  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:33:30.230946  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:33:33.349560  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:33:42.445826  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-778578 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.169593083s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-778578 -n embed-certs-778578
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wpmnx" [b2e9667a-292f-41d2-9e1a-d3448a8b2322] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004133531s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wpmnx" [b2e9667a-292f-41d2-9e1a-d3448a8b2322] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003624635s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-034205 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-034205 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-034205 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-034205 --alsologtostderr -v=1: (1.005741164s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-034205 -n default-k8s-diff-port-034205
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-034205 -n default-k8s-diff-port-034205: exit status 2 (472.981976ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-034205 -n default-k8s-diff-port-034205
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-034205 -n default-k8s-diff-port-034205: exit status 2 (440.221448ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-034205 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-034205 -n default-k8s-diff-port-034205
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-034205 -n default-k8s-diff-port-034205
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.68s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-393830 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-393830 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (42.691891021s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.69s)
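The --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 setting should surface as per-node podCIDRs once the node registers; a sketch to verify (the printed value is illustrative):

	$ kubectl --context newest-cni-393830 get nodes -o jsonpath='{.items[*].spec.podCIDR}'
	10.42.0.0/24    # illustrative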

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5955c" [07fa5b3c-3629-4d00-b34e-6764f5e84c05] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004302707s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5955c" [07fa5b3c-3629-4d00-b34e-6764f5e84c05] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004077313s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-778578 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-778578 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-778578 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-778578 --alsologtostderr -v=1: (1.065612358s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-778578 -n embed-certs-778578
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-778578 -n embed-certs-778578: exit status 2 (376.500824ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-778578 -n embed-certs-778578
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-778578 -n embed-certs-778578: exit status 2 (416.027102ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-778578 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-778578 -n embed-certs-778578
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-778578 -n embed-certs-778578
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.86s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0918 21:34:32.591439  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m39.399647156s)
--- PASS: TestNetworkPlugins/group/auto/Start (99.40s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.46s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-393830 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-393830 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.457873555s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.46s)

TestStartStop/group/newest-cni/serial/Stop (1.32s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-393830 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-393830 --alsologtostderr -v=3: (1.317425495s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-393830 -n newest-cni-393830
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-393830 -n newest-cni-393830: exit status 7 (99.461416ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-393830 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (22.58s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-393830 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0918 21:34:52.152501  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-393830 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (22.236718713s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-393830 -n newest-cni-393830
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.58s)

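Note: SecondStart restarts the previously stopped profile with the same configuration. Two flags do the interesting work: --wait=apiserver,system_pods,default_sa gates startup on only those readiness checks (fitting for a CNI profile where ordinary pods cannot schedule yet, per the warnings in this log), and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 is passed through to kubeadm. A sketch that rebuilds the exact command line from the log, purely for illustration:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Arguments copied verbatim from the SecondStart invocation above.
		cmd := exec.Command("out/minikube-linux-arm64",
			"start", "-p", "newest-cni-393830",
			"--memory=2200", "--alsologtostderr",
			"--wait=apiserver,system_pods,default_sa", // gate only on these checks
			"--feature-gates", "ServerSideApply=true",
			"--network-plugin=cni",
			"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16", // forwarded to kubeadm
			"--driver=docker", "--container-runtime=containerd",
			"--kubernetes-version=v1.31.1")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
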
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-393830 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

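Note: VerifyKubernetesImages shells out to "minikube image list --format=json" and scans the result for images outside the expected set; the kindest/kindnetd image above is reported but tolerated. The JSON schema is not shown in this log, so the sketch below deliberately treats the payload as opaque and only re-indents it (binary path and profile name taken from this run):

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "newest-cni-393830", "image", "list", "--format=json").Output()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		// Re-indent the raw JSON for inspection instead of assuming its schema.
		var pretty bytes.Buffer
		if err := json.Indent(&pretty, out, "", "  "); err != nil {
			fmt.Print(string(out)) // fall back to the raw output
			return
		}
		fmt.Println(pretty.String())
	}
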
TestStartStop/group/newest-cni/serial/Pause (3.03s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-393830 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-393830 -n newest-cni-393830
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-393830 -n newest-cni-393830: exit status 2 (335.561855ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-393830 -n newest-cni-393830
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-393830 -n newest-cni-393830: exit status 2 (331.294837ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-393830 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-393830 -n newest-cni-393830
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-393830 -n newest-cni-393830
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.03s)
E0918 21:40:17.025621  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (51.81s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0918 21:35:58.585688  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (51.807921492s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.81s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-808324 "pgrep -a kubelet"
I0918 21:36:04.483962  879497 config.go:182] Loaded profile config "auto-808324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

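Note: the KubeletFlags checks inspect the kubelet command line inside the node: "pgrep -a kubelet" prints the PID followed by the full command line, exposing the flags kubelet was actually started with. A minimal Go sketch of the same probe; the containerd substring check at the end is an illustrative assumption, not the test's actual assertion:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `pgrep -a` prints "<pid> <full command line>" for matching processes.
		out, err := exec.Command("out/minikube-linux-arm64",
			"ssh", "-p", "auto-808324", "pgrep -a kubelet").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		line := strings.TrimSpace(string(out))
		fmt.Println(line)
		if strings.Contains(line, "containerd") { // assumption: runtime shows up in the flags
			fmt.Println("kubelet appears to be wired to containerd")
		}
	}
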
TestNetworkPlugins/group/auto/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-808324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q69tb" [99aeafb5-7317-44ba-bdb5-f824bb0b70d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-q69tb" [99aeafb5-7317-44ba-bdb5-f824bb0b70d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004213877s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nsfbj" [7866dcb1-5805-432f-916f-139c57460400] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003622501s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

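Note: the ControllerPod checks wait up to 10m0s for pods matching a CNI's label (here app=kindnet in kube-system) to become healthy. A rough equivalent using kubectl's jsonpath output is sketched below; the 5-second polling cadence and the phase-only notion of "healthy" are simplifying assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPhase polls pods matching a label selector until all of them
	// report the wanted phase, or the timeout elapses.
	func waitForPhase(ns, selector, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
				"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil {
				phases := strings.Fields(string(out))
				if len(phases) > 0 && allEqual(phases, want) {
					return nil
				}
			}
			time.Sleep(5 * time.Second) // polling cadence is an assumption
		}
		return fmt.Errorf("pods %q in %q not %s within %v", selector, ns, want, timeout)
	}

	func allEqual(ss []string, want string) bool {
		for _, s := range ss {
			if s != want {
				return false
			}
		}
		return true
	}

	func main() {
		if err := waitForPhase("kube-system", "app=kindnet", "Running", 10*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
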
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-808324 "pgrep -a kubelet"
I0918 21:36:15.228269  879497 config.go:182] Loaded profile config "kindnet-808324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-808324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6q9t7" [1e6d773a-72fa-4635-afe5-3caf5eaf4f9c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6q9t7" [1e6d773a-72fa-4635-afe5-3caf5eaf4f9c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.006785365s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

TestNetworkPlugins/group/auto/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-808324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

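Note: each CNI profile runs the same three probes from inside the netcat deployment. "nslookup kubernetes.default" exercises in-cluster DNS; "nc ... localhost 8080" checks that the pod can reach its own port; "nc ... netcat 8080" checks hairpin traffic, i.e. a pod reaching itself back through its own service. In the nc invocations, -w 5 sets a 5-second timeout, -i 5 an interval between probes, and -z connects without sending data. A minimal Go sketch driving the same probes via kubectl exec (context name from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe runs one shell command inside the netcat deployment, mirroring
	// the DNS/Localhost/HairPin checks in the log above.
	func probe(kubectx, shellCmd string) error {
		out, err := exec.Command("kubectl", "--context", kubectx,
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
		fmt.Printf("$ %s\n%s", shellCmd, out)
		return err
	}

	func main() {
		for _, c := range []string{
			"nslookup kubernetes.default",    // in-cluster DNS
			"nc -w 5 -i 5 -z localhost 8080", // pod reaches its own port
			"nc -w 5 -i 5 -z netcat 8080",    // hairpin: back in via the service
		} {
			if err := probe("auto-808324", c); err != nil {
				fmt.Println("probe failed:", err)
			}
		}
	}
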
TestNetworkPlugins/group/kindnet/DNS (0.31s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-808324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.31s)

TestNetworkPlugins/group/kindnet/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/calico/Start (72.49s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m12.492234629s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.49s)

TestNetworkPlugins/group/custom-flannel/Start (53.74s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0918 21:37:08.292027  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:37:35.994252  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/no-preload-460226/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (53.73844829s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.74s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-808324 "pgrep -a kubelet"
I0918 21:37:44.603636  879497 config.go:182] Loaded profile config "custom-flannel-808324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-808324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bzmjj" [c41bb7cc-c51b-44a8-aab1-3dd5f62274f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bzmjj" [c41bb7cc-c51b-44a8-aab1-3dd5f62274f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004291811s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nw7rb" [1443b046-609a-4b73-97d4-be9be81cd444] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005100813s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-808324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-808324 "pgrep -a kubelet"
I0918 21:37:57.952295  879497 config.go:182] Loaded profile config "calico-808324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (12.28s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-808324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kg6rc" [1cb2db2f-6f00-47bf-83c4-421ccaf70ee7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kg6rc" [1cb2db2f-6f00-47bf-83c4-421ccaf70ee7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004125847s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.28s)

TestNetworkPlugins/group/calico/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-808324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (79.48s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0918 21:38:33.348861  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/functional-247915/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m19.481814828s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.48s)

TestNetworkPlugins/group/flannel/Start (53.69s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0918 21:38:55.078340  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:38:55.085249  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:38:55.096663  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:38:55.117996  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:38:55.159571  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:38:55.241021  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:38:55.402412  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:38:55.723625  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:38:56.365104  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:38:57.647157  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:39:00.219350  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:39:05.340663  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:39:15.581937  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (53.686276797s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.69s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zbjmv" [d2a1acd9-e5db-4c7b-9360-4271d25ebd94] Running
E0918 21:39:32.591617  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/addons-287708/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:39:36.063284  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/default-k8s-diff-port-034205/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004012582s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-808324 "pgrep -a kubelet"
I0918 21:39:37.432408  879497 config.go:182] Loaded profile config "flannel-808324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-808324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7t8qq" [82b8f196-af7b-4059-8e87-fbd1ed11a7cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7t8qq" [82b8f196-af7b-4059-8e87-fbd1ed11a7cc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004097836s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-808324 "pgrep -a kubelet"
I0918 21:39:39.008475  879497 config.go:182] Loaded profile config "enable-default-cni-808324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-808324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bs6vl" [b1cc3cbd-d280-4914-9d42-3cda01c3c321] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bs6vl" [b1cc3cbd-d280-4914-9d42-3cda01c3c321] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.010709288s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

TestNetworkPlugins/group/flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-808324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-808324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (40.62s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-808324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (40.620376237s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.62s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-808324 "pgrep -a kubelet"
I0918 21:40:55.945636  879497 config.go:182] Loaded profile config "bridge-808324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-808324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jcztv" [92495d76-29e4-47de-bea9-7595e8f99788] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0918 21:40:58.586576  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/old-k8s-version-025914/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-jcztv" [92495d76-29e4-47de-bea9-7595e8f99788] Running
E0918 21:41:04.740005  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/auto-808324/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:41:04.746848  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/auto-808324/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:41:04.758315  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/auto-808324/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:41:04.779844  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/auto-808324/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:41:04.821259  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/auto-808324/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:41:04.902752  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/auto-808324/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:41:05.064521  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/auto-808324/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:41:05.386121  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/auto-808324/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:41:06.028268  879497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/auto-808324/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.002821362s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

TestNetworkPlugins/group/bridge/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-808324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-808324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-222440 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-222440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-222440
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-789309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-789309
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
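Unlike the zero-second skips above, this one costs 0.17s because the stub profile is still deleted. A sketch of the skip-then-cleanup shape, under the assumption (supported by the helpers_test.go lines above) that cleanup is deferred and therefore survives t.Skip; deleteProfileSketch and driverName are hypothetical names:

package integration

import (
	"os/exec"
	"testing"
)

// driverName is a hypothetical stand-in for the suite's active-driver lookup.
func driverName() string { return "docker" }

// deleteProfileSketch approximates the helpers_test.go cleanup above.
func deleteProfileSketch(t *testing.T, profile string) {
	out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		t.Logf("cleanup of %q failed: %v\n%s", profile, err, out)
	}
}

func disableDriverMountsSketch(t *testing.T) {
	profile := "disable-driver-mounts-789309"
	// Deferred calls still run after t.Skip (which exits via runtime.Goexit),
	// which accounts for the 0.17s this skip spends on "minikube delete".
	defer deleteProfileSketch(t, profile)
	if driverName() != "virtualbox" {
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
}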

TestNetworkPlugins/group/kubenet (3.51s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-808324 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-808324

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-808324

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-808324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-808324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-808324

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-808324

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-808324

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-808324

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-808324

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-808324

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: /etc/hosts:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: /etc/resolv.conf:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-808324

>>> host: crictl pods:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: crictl containers:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> k8s: describe netcat deployment:
error: context "kubenet-808324" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-808324" does not exist

>>> k8s: netcat logs:
error: context "kubenet-808324" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-808324" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-808324" does not exist

>>> k8s: coredns logs:
error: context "kubenet-808324" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-808324" does not exist

>>> k8s: api server logs:
error: context "kubenet-808324" does not exist

>>> host: /etc/cni:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: ip a s:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: ip r s:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: iptables-save:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: iptables table nat:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-808324" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-808324" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-808324" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: kubelet daemon config:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> k8s: kubelet logs:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19667-874114/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 21:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-979291
contexts:
- context:
    cluster: pause-979291
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 21:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-979291
  name: pause-979291
current-context: pause-979291
kind: Config
preferences: {}
users:
- name: pause-979291
  user:
    client-certificate: /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/pause-979291/client.crt
    client-key: /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/pause-979291/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-808324

>>> host: docker daemon status:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: docker daemon config:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: docker system info:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: cri-docker daemon status:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: cri-docker daemon config:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: cri-dockerd version:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: containerd daemon status:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: containerd daemon config:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: containerd config dump:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: crio daemon status:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: crio daemon config:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: /etc/crio:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

>>> host: crio config:
* Profile "kubenet-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-808324"

----------------------- debugLogs end: kubenet-808324 [took: 3.333887022s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-808324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-808324
--- SKIP: TestNetworkPlugins/group/kubenet (3.51s)
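The wall of identical errors above comes from a probe runner that shells out to kubectl and minikube for each >>> label and logs whatever comes back, failures included, since the kubenet-808324 profile was never started. A hedged sketch of one such probe (debugProbeSketch is a hypothetical name, not minikube's helper):

package integration

import (
	"os/exec"
	"testing"
)

// debugProbeSketch runs one probe and logs its output; against a profile
// that was never started, that output is exactly the "context was not found"
// and "Profile ... not found" text repeated above.
func debugProbeSketch(t *testing.T, label string, name string, args ...string) {
	t.Logf(">>> %s:", label)
	out, err := exec.Command(name, args...).CombinedOutput()
	t.Logf("%s", out)
	if err != nil {
		// Failures are expected for never-started profiles; log, don't fail.
		t.Logf("(probe error: %v)", err)
	}
}

func kubenetDebugLogsSketch(t *testing.T) {
	ctx := "kubenet-808324"
	debugProbeSketch(t, "k8s: describe coredns deployment",
		"kubectl", "--context", ctx, "-n", "kube-system", "describe", "deployment", "coredns")
}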

TestNetworkPlugins/group/cilium (4.75s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-808324 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-808324

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-808324

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-808324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-808324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-808324

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-808324

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-808324

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-808324

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-808324

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-808324

>>> host: /etc/nsswitch.conf:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: /etc/hosts:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: /etc/resolv.conf:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-808324

>>> host: crictl pods:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: crictl containers:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> k8s: describe netcat deployment:
error: context "cilium-808324" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-808324" does not exist

>>> k8s: netcat logs:
error: context "cilium-808324" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-808324" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-808324" does not exist

>>> k8s: coredns logs:
error: context "cilium-808324" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-808324" does not exist

>>> k8s: api server logs:
error: context "cilium-808324" does not exist

>>> host: /etc/cni:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: ip a s:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: ip r s:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: iptables-save:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: iptables table nat:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-808324

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-808324

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-808324" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-808324" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-808324

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-808324

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-808324" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-808324" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-808324" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-808324" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-808324" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: kubelet daemon config:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> k8s: kubelet logs:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19667-874114/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 21:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-979291
contexts:
- context:
    cluster: pause-979291
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 21:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-979291
  name: pause-979291
current-context: pause-979291
kind: Config
preferences: {}
users:
- name: pause-979291
  user:
    client-certificate: /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/pause-979291/client.crt
    client-key: /home/jenkins/minikube-integration/19667-874114/.minikube/profiles/pause-979291/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-808324

>>> host: docker daemon status:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: docker daemon config:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: docker system info:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: cri-docker daemon status:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: cri-docker daemon config:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: cri-dockerd version:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: containerd daemon status:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: containerd daemon config:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: containerd config dump:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: crio daemon status:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: crio daemon config:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: /etc/crio:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

>>> host: crio config:
* Profile "cilium-808324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-808324"

----------------------- debugLogs end: cilium-808324 [took: 4.561368801s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-808324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-808324
--- SKIP: TestNetworkPlugins/group/cilium (4.75s)
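One detail worth noting in both debugLogs dumps: the ">>> k8s: kubectl config" section prints the pause-979291 context, not kubenet/cilium. With no matching context to select, the dump simply shows the shared kubeconfig as-is, whose current-context is whichever profile last updated it. A minimal sketch that reads the same file with client-go's real clientcmd API (LoadFromFile and RecommendedHomeFile exist as written):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig that "kubectl config view" dumps (~/.kube/config
	// by default; CI may point KUBECONFIG elsewhere).
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext) // pause-979291 in this run
	if _, ok := cfg.Contexts["cilium-808324"]; !ok {
		fmt.Println(`context "cilium-808324" does not exist`) // matches the errors above
	}
}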