Test Report: Docker_Linux_containerd_arm64 19443

                    
8b84af123e21bffd183d137e5ca9151109c81e73:2024-08-15:35789

Failed tests (2/328)

|-------|---------------------------------------------------------|--------------|
| Order |                       Failed test                       | Duration (s) |
|-------|---------------------------------------------------------|--------------|
|    29 | TestAddons/serial/Volcano                                |       199.81 |
|   302 | TestStartStop/group/old-k8s-version/serial/SecondStart   |       381.71 |
|-------|---------------------------------------------------------|--------------|
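The Volcano failure detailed below reduces to a test pod that stays Pending with a "FailedScheduling ... Insufficient cpu" event. A minimal local-reproduction sketch, built only from commands that appear later in this report (profile addons-428464, namespace my-volcano, pod test-job-nginx-0); it assumes the Volcano job has already been created from testdata/vcjob.yaml, as the test does:

	# Start a cluster roughly as the test does (flag values taken from the Audit table below).
	out/minikube-linux-arm64 start -p addons-428464 --wait=true --memory=4000 \
	  --driver=docker --container-runtime=containerd --addons=volcano
	# Inspect the Volcano job and the pod that never schedules.
	kubectl --context addons-428464 get vcjob -n my-volcano
	kubectl --context addons-428464 describe po test-job-nginx-0 -n my-volcano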
TestAddons/serial/Volcano (199.81s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 46.082115ms
addons_test.go:913: volcano-controller stabilized in 46.201425ms
addons_test.go:897: volcano-scheduler stabilized in 46.230553ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-xggvj" [fb5f2291-97f3-4e82-bd7a-8a237c296899] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003442445s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-bgbq6" [1e606298-9648-40e9-be9c-ecb7cdb7bfa6] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003866279s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-wvj4z" [d81328df-a965-42c9-a0a2-2a15f0f78083] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003459494s
addons_test.go:932: (dbg) Run:  kubectl --context addons-428464 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-428464 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-428464 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a15096e5-bccc-43d3-acbc-675dfadd54f4] Pending
helpers_test.go:344: "test-job-nginx-0" [a15096e5-bccc-43d3-acbc-675dfadd54f4] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-428464 -n addons-428464
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-15 00:43:16.316499844 +0000 UTC m=+431.953810253
addons_test.go:964: (dbg) Run:  kubectl --context addons-428464 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-428464 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-648a78ea-f02d-4bc6-89e0-101fb8aea336
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kw45j (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-kw45j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-428464 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-428464 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
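The FailedScheduling event above, together with the 2-CPU limit on the kic container shown in the docker inspect below ("NanoCpus": 2000000000), suggests the pod's 1-CPU request no longer fits alongside the addon pods. A quick way to confirm on a live cluster (a sketch, not part of the test; in minikube the node is named after the profile):

	# Compare allocatable CPU with the per-pod CPU requests listed under "Non-terminated Pods" / "Allocated resources".
	kubectl --context addons-428464 describe node addons-428464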
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-428464
helpers_test.go:235: (dbg) docker inspect addons-428464:
-- stdout --
	[
	    {
	        "Id": "ff70a503a154c637d7a11cafe796b8eb7978c70564efaa0fc698f306f68257e2",
	        "Created": "2024-08-15T00:36:48.964119962Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 593925,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T00:36:49.112426176Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2b339a1cac4376103734d3066f7ccdf0ac7377a2f8f8d5eb9e81c29f3abcec50",
	        "ResolvConfPath": "/var/lib/docker/containers/ff70a503a154c637d7a11cafe796b8eb7978c70564efaa0fc698f306f68257e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff70a503a154c637d7a11cafe796b8eb7978c70564efaa0fc698f306f68257e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff70a503a154c637d7a11cafe796b8eb7978c70564efaa0fc698f306f68257e2/hosts",
	        "LogPath": "/var/lib/docker/containers/ff70a503a154c637d7a11cafe796b8eb7978c70564efaa0fc698f306f68257e2/ff70a503a154c637d7a11cafe796b8eb7978c70564efaa0fc698f306f68257e2-json.log",
	        "Name": "/addons-428464",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-428464:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-428464",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6cb644e9831f0d29ad2fa81727f2679dae06aa8176ac39a1c5d01dba1c2488fd-init/diff:/var/lib/docker/overlay2/724d641fa67867c1f8a89bb3b136ff9997d84663650d206cbef2b533f5f97838/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cb644e9831f0d29ad2fa81727f2679dae06aa8176ac39a1c5d01dba1c2488fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cb644e9831f0d29ad2fa81727f2679dae06aa8176ac39a1c5d01dba1c2488fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cb644e9831f0d29ad2fa81727f2679dae06aa8176ac39a1c5d01dba1c2488fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-428464",
	                "Source": "/var/lib/docker/volumes/addons-428464/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-428464",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-428464",
	                "name.minikube.sigs.k8s.io": "addons-428464",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84e4aafde6a252d9c3d9143ea6d028c4fd119b5c2de6a3fad7d3af06d5c5aae8",
	            "SandboxKey": "/var/run/docker/netns/84e4aafde6a2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-428464": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6ffc072bf2f4605efb252d88f2188519766f528a2a661250264dd6b9cb3a9aaa",
	                    "EndpointID": "07db94f1159ee171af04294ecb0d3b694105e23af690f6ce6fe1fc531fecf952",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-428464",
	                        "ff70a503a154"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
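For reference, the HostConfig values in the inspect output above line up with the flags recorded in the Audit table below: "Memory": 4194304000 bytes = 4000 × 2^20, i.e. the 4000 MiB requested via --memory=4000, and "NanoCpus": 2000000000 = 2 CPUs, matching CPUs:2 in the saved cluster config.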
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-428464 -n addons-428464
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-428464 logs -n 25: (1.574493452s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-636458   | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC |                     |
	|         | -p download-only-636458              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC | 15 Aug 24 00:36 UTC |
	| delete  | -p download-only-636458              | download-only-636458   | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC | 15 Aug 24 00:36 UTC |
	| start   | -o=json --download-only              | download-only-886391   | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC |                     |
	|         | -p download-only-886391              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC | 15 Aug 24 00:36 UTC |
	| delete  | -p download-only-886391              | download-only-886391   | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC | 15 Aug 24 00:36 UTC |
	| delete  | -p download-only-636458              | download-only-636458   | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC | 15 Aug 24 00:36 UTC |
	| delete  | -p download-only-886391              | download-only-886391   | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC | 15 Aug 24 00:36 UTC |
	| start   | --download-only -p                   | download-docker-742911 | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC |                     |
	|         | download-docker-742911               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-742911            | download-docker-742911 | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC | 15 Aug 24 00:36 UTC |
	| start   | --download-only -p                   | binary-mirror-730083   | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC |                     |
	|         | binary-mirror-730083                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35599               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-730083              | binary-mirror-730083   | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC | 15 Aug 24 00:36 UTC |
	| addons  | disable dashboard -p                 | addons-428464          | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC |                     |
	|         | addons-428464                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-428464          | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC |                     |
	|         | addons-428464                        |                        |         |         |                     |                     |
	| start   | -p addons-428464 --wait=true         | addons-428464          | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC | 15 Aug 24 00:39 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:36:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:36:23.954517  593428 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:36:23.954716  593428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:36:23.954746  593428 out.go:304] Setting ErrFile to fd 2...
	I0815 00:36:23.954769  593428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:36:23.955022  593428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	I0815 00:36:23.955524  593428 out.go:298] Setting JSON to false
	I0815 00:36:23.956482  593428 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15506,"bootTime":1723666678,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 00:36:23.956588  593428 start.go:139] virtualization:  
	I0815 00:36:23.958946  593428 out.go:177] * [addons-428464] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 00:36:23.961918  593428 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:36:23.962048  593428 notify.go:220] Checking for updates...
	I0815 00:36:23.965509  593428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:36:23.967131  593428 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 00:36:23.969268  593428 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	I0815 00:36:23.970832  593428 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 00:36:23.972416  593428 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:36:23.974297  593428 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:36:24.001479  593428 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:36:24.001601  593428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:36:24.062648  593428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 00:36:24.052921172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:36:24.062762  593428 docker.go:307] overlay module found
	I0815 00:36:24.065035  593428 out.go:177] * Using the docker driver based on user configuration
	I0815 00:36:24.066535  593428 start.go:297] selected driver: docker
	I0815 00:36:24.066558  593428 start.go:901] validating driver "docker" against <nil>
	I0815 00:36:24.066575  593428 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:36:24.067342  593428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:36:24.124070  593428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 00:36:24.114442753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:36:24.124256  593428 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:36:24.124534  593428 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:36:24.126825  593428 out.go:177] * Using Docker driver with root privileges
	I0815 00:36:24.128700  593428 cni.go:84] Creating CNI manager for ""
	I0815 00:36:24.128719  593428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 00:36:24.128732  593428 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:36:24.128808  593428 start.go:340] cluster config:
	{Name:addons-428464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-428464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:36:24.131203  593428 out.go:177] * Starting "addons-428464" primary control-plane node in "addons-428464" cluster
	I0815 00:36:24.133143  593428 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0815 00:36:24.135251  593428 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 00:36:24.137375  593428 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 00:36:24.137434  593428 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-587265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0815 00:36:24.137460  593428 cache.go:56] Caching tarball of preloaded images
	I0815 00:36:24.137458  593428 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 00:36:24.137542  593428 preload.go:172] Found /home/jenkins/minikube-integration/19443-587265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 00:36:24.137552  593428 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0815 00:36:24.137888  593428 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/config.json ...
	I0815 00:36:24.137957  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/config.json: {Name:mkde8044813e572b3555560f35befb0ee7dc05e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:24.152238  593428 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:36:24.152351  593428 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 00:36:24.152375  593428 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 00:36:24.152381  593428 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 00:36:24.152389  593428 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 00:36:24.152394  593428 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 00:36:41.072718  593428 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 00:36:41.072753  593428 cache.go:194] Successfully downloaded all kic artifacts
	I0815 00:36:41.072791  593428 start.go:360] acquireMachinesLock for addons-428464: {Name:mkd3edfb2e57264eaa7caa16327cd0ebf778aec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:36:41.072909  593428 start.go:364] duration metric: took 95.343µs to acquireMachinesLock for "addons-428464"
	I0815 00:36:41.072938  593428 start.go:93] Provisioning new machine with config: &{Name:addons-428464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-428464 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0815 00:36:41.073017  593428 start.go:125] createHost starting for "" (driver="docker")
	I0815 00:36:41.075120  593428 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0815 00:36:41.075385  593428 start.go:159] libmachine.API.Create for "addons-428464" (driver="docker")
	I0815 00:36:41.075424  593428 client.go:168] LocalClient.Create starting
	I0815 00:36:41.075555  593428 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem
	I0815 00:36:41.499151  593428 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/cert.pem
	I0815 00:36:42.504149  593428 cli_runner.go:164] Run: docker network inspect addons-428464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0815 00:36:42.518070  593428 cli_runner.go:211] docker network inspect addons-428464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0815 00:36:42.518156  593428 network_create.go:284] running [docker network inspect addons-428464] to gather additional debugging logs...
	I0815 00:36:42.518179  593428 cli_runner.go:164] Run: docker network inspect addons-428464
	W0815 00:36:42.533288  593428 cli_runner.go:211] docker network inspect addons-428464 returned with exit code 1
	I0815 00:36:42.533320  593428 network_create.go:287] error running [docker network inspect addons-428464]: docker network inspect addons-428464: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-428464 not found
	I0815 00:36:42.533335  593428 network_create.go:289] output of [docker network inspect addons-428464]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-428464 not found
	
	** /stderr **
	I0815 00:36:42.533432  593428 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 00:36:42.548528  593428 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a5de60}
	I0815 00:36:42.548579  593428 network_create.go:124] attempt to create docker network addons-428464 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0815 00:36:42.548644  593428 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-428464 addons-428464
	I0815 00:36:42.618854  593428 network_create.go:108] docker network addons-428464 192.168.49.0/24 created
	I0815 00:36:42.618892  593428 kic.go:121] calculated static IP "192.168.49.2" for the "addons-428464" container
	I0815 00:36:42.618973  593428 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0815 00:36:42.633894  593428 cli_runner.go:164] Run: docker volume create addons-428464 --label name.minikube.sigs.k8s.io=addons-428464 --label created_by.minikube.sigs.k8s.io=true
	I0815 00:36:42.650062  593428 oci.go:103] Successfully created a docker volume addons-428464
	I0815 00:36:42.650159  593428 cli_runner.go:164] Run: docker run --rm --name addons-428464-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-428464 --entrypoint /usr/bin/test -v addons-428464:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib
	I0815 00:36:44.598890  593428 cli_runner.go:217] Completed: docker run --rm --name addons-428464-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-428464 --entrypoint /usr/bin/test -v addons-428464:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib: (1.948678058s)
	I0815 00:36:44.598924  593428 oci.go:107] Successfully prepared a docker volume addons-428464
	I0815 00:36:44.598946  593428 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 00:36:44.598965  593428 kic.go:194] Starting extracting preloaded images to volume ...
	I0815 00:36:44.599048  593428 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19443-587265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-428464:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir
	I0815 00:36:48.899179  593428 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19443-587265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-428464:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir: (4.300088582s)
	I0815 00:36:48.899215  593428 kic.go:203] duration metric: took 4.300245767s to extract preloaded images to volume ...
	W0815 00:36:48.899369  593428 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0815 00:36:48.899490  593428 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0815 00:36:48.949648  593428 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-428464 --name addons-428464 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-428464 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-428464 --network addons-428464 --ip 192.168.49.2 --volume addons-428464:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002
	I0815 00:36:49.301504  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Running}}
	I0815 00:36:49.325202  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:36:49.343471  593428 cli_runner.go:164] Run: docker exec addons-428464 stat /var/lib/dpkg/alternatives/iptables
	I0815 00:36:49.415074  593428 oci.go:144] the created container "addons-428464" has a running status.
	I0815 00:36:49.415107  593428 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa...
	I0815 00:36:50.480959  593428 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0815 00:36:50.505674  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:36:50.522624  593428 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0815 00:36:50.522648  593428 kic_runner.go:114] Args: [docker exec --privileged addons-428464 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0815 00:36:50.577055  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:36:50.592037  593428 machine.go:94] provisionDockerMachine start ...
	I0815 00:36:50.592137  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:36:50.610907  593428 main.go:141] libmachine: Using SSH client type: native
	I0815 00:36:50.611181  593428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I0815 00:36:50.611197  593428 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 00:36:50.743215  593428 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-428464
	
	I0815 00:36:50.743265  593428 ubuntu.go:169] provisioning hostname "addons-428464"
	I0815 00:36:50.751566  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:36:50.768298  593428 main.go:141] libmachine: Using SSH client type: native
	I0815 00:36:50.768582  593428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I0815 00:36:50.768602  593428 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-428464 && echo "addons-428464" | sudo tee /etc/hostname
	I0815 00:36:50.911566  593428 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-428464
	
	I0815 00:36:50.911726  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:36:50.928196  593428 main.go:141] libmachine: Using SSH client type: native
	I0815 00:36:50.928439  593428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I0815 00:36:50.928461  593428 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-428464' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-428464/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-428464' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:36:51.064346  593428 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:36:51.064373  593428 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19443-587265/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-587265/.minikube}
	I0815 00:36:51.064393  593428 ubuntu.go:177] setting up certificates
	I0815 00:36:51.064403  593428 provision.go:84] configureAuth start
	I0815 00:36:51.064467  593428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-428464
	I0815 00:36:51.081758  593428 provision.go:143] copyHostCerts
	I0815 00:36:51.081848  593428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-587265/.minikube/cert.pem (1123 bytes)
	I0815 00:36:51.081979  593428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-587265/.minikube/key.pem (1675 bytes)
	I0815 00:36:51.082039  593428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-587265/.minikube/ca.pem (1082 bytes)
	I0815 00:36:51.082091  593428 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-587265/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca-key.pem org=jenkins.addons-428464 san=[127.0.0.1 192.168.49.2 addons-428464 localhost minikube]
	I0815 00:36:51.714230  593428 provision.go:177] copyRemoteCerts
	I0815 00:36:51.714302  593428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:36:51.714345  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:36:51.730888  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:36:51.824667  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 00:36:51.849214  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 00:36:51.874058  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 00:36:51.897850  593428 provision.go:87] duration metric: took 833.431587ms to configureAuth
	I0815 00:36:51.897876  593428 ubuntu.go:193] setting minikube options for container-runtime
	I0815 00:36:51.898060  593428 config.go:182] Loaded profile config "addons-428464": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 00:36:51.898068  593428 machine.go:97] duration metric: took 1.306014343s to provisionDockerMachine
	I0815 00:36:51.898074  593428 client.go:171] duration metric: took 10.822632986s to LocalClient.Create
	I0815 00:36:51.898097  593428 start.go:167] duration metric: took 10.82271319s to libmachine.API.Create "addons-428464"
	I0815 00:36:51.898106  593428 start.go:293] postStartSetup for "addons-428464" (driver="docker")
	I0815 00:36:51.898115  593428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:36:51.898163  593428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:36:51.898215  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:36:51.914141  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:36:52.010625  593428 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:36:52.014348  593428 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 00:36:52.014388  593428 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 00:36:52.014402  593428 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 00:36:52.014409  593428 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 00:36:52.014420  593428 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-587265/.minikube/addons for local assets ...
	I0815 00:36:52.014495  593428 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-587265/.minikube/files for local assets ...
	I0815 00:36:52.014530  593428 start.go:296] duration metric: took 116.417801ms for postStartSetup
	I0815 00:36:52.014894  593428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-428464
	I0815 00:36:52.031367  593428 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/config.json ...
	I0815 00:36:52.031682  593428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:36:52.031739  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:36:52.047769  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:36:52.140681  593428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 00:36:52.145266  593428 start.go:128] duration metric: took 11.072233657s to createHost
	I0815 00:36:52.145294  593428 start.go:83] releasing machines lock for "addons-428464", held for 11.072371766s
	I0815 00:36:52.145378  593428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-428464
	I0815 00:36:52.161014  593428 ssh_runner.go:195] Run: cat /version.json
	I0815 00:36:52.161059  593428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:36:52.161068  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:36:52.161104  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:36:52.183947  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:36:52.195782  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:36:52.275152  593428 ssh_runner.go:195] Run: systemctl --version
	I0815 00:36:52.410563  593428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 00:36:52.414600  593428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0815 00:36:52.438962  593428 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0815 00:36:52.439053  593428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:36:52.467109  593428 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
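(Editor's note) The loopback patch above injects a "name" field and pins cniVersion to 1.0.0 in whichever *loopback.conf* file is present. A minimal sketch of the patched file, assuming a typical single-plugin loopback config (the filename and any pre-existing extra fields are assumptions, not taken from this run):

    # hypothetical filename; content reflects the two sed edits applied above
    cat /etc/cni/net.d/200-loopback.conf
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "loopback",
    #   "type": "loopback"
    # }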
	I0815 00:36:52.467129  593428 start.go:495] detecting cgroup driver to use...
	I0815 00:36:52.467161  593428 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 00:36:52.467212  593428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 00:36:52.479635  593428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 00:36:52.491222  593428 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:36:52.491361  593428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:36:52.505582  593428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:36:52.520844  593428 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:36:52.609557  593428 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:36:52.708094  593428 docker.go:233] disabling docker service ...
	I0815 00:36:52.708166  593428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:36:52.728710  593428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:36:52.742044  593428 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:36:52.835496  593428 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:36:52.935984  593428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:36:52.947416  593428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:36:52.964473  593428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 00:36:52.974949  593428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 00:36:52.985152  593428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 00:36:52.985276  593428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 00:36:52.995270  593428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 00:36:53.006773  593428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 00:36:53.017823  593428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 00:36:53.028119  593428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:36:53.037069  593428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 00:36:53.046940  593428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 00:36:53.056871  593428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
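(Editor's note) Taken together, the sed edits in this block pin the pause image, force the runc v2 shim, keep containerd on the cgroupfs driver, point the CNI conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. A hedged sketch of the lines they converge on in /etc/containerd/config.toml (surrounding sections omitted; exact placement within the shipped config is an assumption):

    grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    #     sandbox_image = "registry.k8s.io/pause:3.10"
    #     enable_unprivileged_ports = true
    #     SystemdCgroup = false
    #     conf_dir = "/etc/cni/net.d"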
	I0815 00:36:53.066589  593428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:36:53.075198  593428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:36:53.083481  593428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:36:53.182689  593428 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 00:36:53.310978  593428 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0815 00:36:53.311077  593428 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0815 00:36:53.314766  593428 start.go:563] Will wait 60s for crictl version
	I0815 00:36:53.314866  593428 ssh_runner.go:195] Run: which crictl
	I0815 00:36:53.318507  593428 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:36:53.360303  593428 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0815 00:36:53.360446  593428 ssh_runner.go:195] Run: containerd --version
	I0815 00:36:53.383189  593428 ssh_runner.go:195] Run: containerd --version
	I0815 00:36:53.407658  593428 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0815 00:36:53.409633  593428 cli_runner.go:164] Run: docker network inspect addons-428464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 00:36:53.424038  593428 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 00:36:53.427525  593428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:36:53.437822  593428 kubeadm.go:883] updating cluster {Name:addons-428464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-428464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 00:36:53.437959  593428 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 00:36:53.438025  593428 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:36:53.473481  593428 containerd.go:627] all images are preloaded for containerd runtime.
	I0815 00:36:53.473506  593428 containerd.go:534] Images already preloaded, skipping extraction
	I0815 00:36:53.473567  593428 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:36:53.511375  593428 containerd.go:627] all images are preloaded for containerd runtime.
	I0815 00:36:53.511397  593428 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:36:53.511405  593428 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0815 00:36:53.511509  593428 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-428464 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-428464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 00:36:53.511581  593428 ssh_runner.go:195] Run: sudo crictl info
	I0815 00:36:53.550289  593428 cni.go:84] Creating CNI manager for ""
	I0815 00:36:53.550314  593428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 00:36:53.550324  593428 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:36:53.550371  593428 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-428464 NodeName:addons-428464 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:36:53.550569  593428 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-428464"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 00:36:53.550642  593428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:36:53.559363  593428 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:36:53.559431  593428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 00:36:53.568328  593428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0815 00:36:53.586334  593428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:36:53.604389  593428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0815 00:36:53.621768  593428 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0815 00:36:53.625132  593428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
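(Editor's note) Both host-file edits follow the same pattern: strip any stale entry with grep -v, append the new mapping, and copy the temp file back over /etc/hosts. A sketch of the entries the node should now carry, with both addresses taken from the commands above:

    grep minikube.internal /etc/hosts
    # 192.168.49.1	host.minikube.internal
    # 192.168.49.2	control-plane.minikube.internal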
	I0815 00:36:53.635629  593428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:36:53.725590  593428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:36:53.741498  593428 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464 for IP: 192.168.49.2
	I0815 00:36:53.741519  593428 certs.go:194] generating shared ca certs ...
	I0815 00:36:53.741538  593428 certs.go:226] acquiring lock for ca certs: {Name:mkd44da6bd4b219dfe871c9c58d5756252de3a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:53.741670  593428 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-587265/.minikube/ca.key
	I0815 00:36:53.927641  593428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-587265/.minikube/ca.crt ...
	I0815 00:36:53.927673  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/ca.crt: {Name:mk8edfa7638ee4feb438b53edffa682081cf3b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:53.927937  593428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-587265/.minikube/ca.key ...
	I0815 00:36:53.927955  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/ca.key: {Name:mk7ea458e1f51c62f9aa49e37b9521df1d25332c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:53.928061  593428 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.key
	I0815 00:36:54.712132  593428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.crt ...
	I0815 00:36:54.712167  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.crt: {Name:mk464c18982b8617d36aaa84bb894cab48053278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:54.712370  593428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.key ...
	I0815 00:36:54.712386  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.key: {Name:mke211c59a38b473e9ea45c202f389fea6247885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:54.712469  593428 certs.go:256] generating profile certs ...
	I0815 00:36:54.712537  593428 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.key
	I0815 00:36:54.712556  593428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt with IP's: []
	I0815 00:36:54.883000  593428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt ...
	I0815 00:36:54.883031  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: {Name:mk10f8d8fd31478809af796fb298bea82daebef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:54.883228  593428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.key ...
	I0815 00:36:54.883244  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.key: {Name:mk209a9b8e7b1ce0326d479529f06bbbd5c50239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:54.883343  593428 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.key.ca0326f8
	I0815 00:36:54.883368  593428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.crt.ca0326f8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0815 00:36:55.133731  593428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.crt.ca0326f8 ...
	I0815 00:36:55.133763  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.crt.ca0326f8: {Name:mk32bfb62c0c6f6db1fca61e97a84419af3996de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:55.134856  593428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.key.ca0326f8 ...
	I0815 00:36:55.134878  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.key.ca0326f8: {Name:mk35f41bf42ec07a912533b1253bb9c1d6850463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:55.134981  593428 certs.go:381] copying /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.crt.ca0326f8 -> /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.crt
	I0815 00:36:55.135068  593428 certs.go:385] copying /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.key.ca0326f8 -> /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.key
	I0815 00:36:55.135133  593428 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/proxy-client.key
	I0815 00:36:55.135153  593428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/proxy-client.crt with IP's: []
	I0815 00:36:55.744712  593428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/proxy-client.crt ...
	I0815 00:36:55.744751  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/proxy-client.crt: {Name:mka9f6d84851009c6e8f549b749f25259871300f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:55.744989  593428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/proxy-client.key ...
	I0815 00:36:55.745005  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/proxy-client.key: {Name:mk632b02766eec3e45d578a679e803f2a5f1bccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:55.745666  593428 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 00:36:55.745710  593428 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem (1082 bytes)
	I0815 00:36:55.745741  593428 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:36:55.745771  593428 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/key.pem (1675 bytes)
	I0815 00:36:55.746360  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:36:55.771610  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 00:36:55.796106  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:36:55.820214  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:36:55.847336  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 00:36:55.871191  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:36:55.895578  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:36:55.918659  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 00:36:55.946228  593428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:36:55.970079  593428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:36:55.988570  593428 ssh_runner.go:195] Run: openssl version
	I0815 00:36:55.994420  593428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:36:56.007406  593428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:36:56.011736  593428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:36 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:36:56.011830  593428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:36:56.019882  593428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
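(Editor's note) The symlink name is OpenSSL's subject hash of the minikube CA, which is exactly what the x509 -hash call above computes; a short sketch, with the hash value taken from the link target created here:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941    <- matches /etc/ssl/certs/b5213941.0 created above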
	I0815 00:36:56.029706  593428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:36:56.033007  593428 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:36:56.033060  593428 kubeadm.go:392] StartCluster: {Name:addons-428464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-428464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:36:56.033175  593428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0815 00:36:56.033239  593428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:36:56.070081  593428 cri.go:89] found id: ""
	I0815 00:36:56.070158  593428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 00:36:56.079003  593428 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 00:36:56.087972  593428 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0815 00:36:56.088039  593428 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 00:36:56.097047  593428 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 00:36:56.097070  593428 kubeadm.go:157] found existing configuration files:
	
	I0815 00:36:56.097142  593428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 00:36:56.106536  593428 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 00:36:56.106637  593428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 00:36:56.115017  593428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 00:36:56.123403  593428 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 00:36:56.123467  593428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 00:36:56.131721  593428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 00:36:56.141058  593428 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 00:36:56.141124  593428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 00:36:56.149747  593428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 00:36:56.158601  593428 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 00:36:56.158675  593428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 00:36:56.167007  593428 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0815 00:36:56.210913  593428 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 00:36:56.211125  593428 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 00:36:56.229829  593428 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0815 00:36:56.229929  593428 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0815 00:36:56.229980  593428 kubeadm.go:310] OS: Linux
	I0815 00:36:56.230030  593428 kubeadm.go:310] CGROUPS_CPU: enabled
	I0815 00:36:56.230092  593428 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0815 00:36:56.230151  593428 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0815 00:36:56.230210  593428 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0815 00:36:56.230262  593428 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0815 00:36:56.230320  593428 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0815 00:36:56.230378  593428 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0815 00:36:56.230438  593428 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0815 00:36:56.230489  593428 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0815 00:36:56.295708  593428 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 00:36:56.295827  593428 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 00:36:56.295949  593428 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 00:36:56.304318  593428 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 00:36:56.307231  593428 out.go:204]   - Generating certificates and keys ...
	I0815 00:36:56.307349  593428 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 00:36:56.307438  593428 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 00:36:57.501648  593428 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 00:36:58.124971  593428 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 00:36:58.646333  593428 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 00:36:58.953188  593428 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 00:36:59.607869  593428 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 00:36:59.608201  593428 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-428464 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 00:36:59.962577  593428 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 00:36:59.962924  593428 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-428464 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 00:37:00.485225  593428 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 00:37:00.757328  593428 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 00:37:01.192190  593428 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 00:37:01.192261  593428 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 00:37:01.950146  593428 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 00:37:02.554984  593428 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 00:37:03.442172  593428 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 00:37:03.717844  593428 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 00:37:04.415943  593428 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 00:37:04.416589  593428 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 00:37:04.419427  593428 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 00:37:04.421510  593428 out.go:204]   - Booting up control plane ...
	I0815 00:37:04.421607  593428 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 00:37:04.421682  593428 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 00:37:04.422217  593428 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 00:37:04.432680  593428 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 00:37:04.438403  593428 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 00:37:04.438743  593428 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 00:37:04.537262  593428 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 00:37:04.537378  593428 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 00:37:06.042182  593428 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.506830879s
	I0815 00:37:06.042567  593428 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 00:37:12.044709  593428 kubeadm.go:310] [api-check] The API server is healthy after 6.00165878s
	I0815 00:37:12.071763  593428 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 00:37:12.093990  593428 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 00:37:12.128655  593428 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 00:37:12.128847  593428 kubeadm.go:310] [mark-control-plane] Marking the node addons-428464 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 00:37:12.153420  593428 kubeadm.go:310] [bootstrap-token] Using token: epynmq.vh2o56imlk1r75jf
	I0815 00:37:12.155203  593428 out.go:204]   - Configuring RBAC rules ...
	I0815 00:37:12.155342  593428 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 00:37:12.174137  593428 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 00:37:12.187441  593428 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 00:37:12.200330  593428 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 00:37:12.205159  593428 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 00:37:12.213350  593428 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 00:37:12.451728  593428 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 00:37:12.876775  593428 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 00:37:13.455902  593428 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 00:37:13.456958  593428 kubeadm.go:310] 
	I0815 00:37:13.457032  593428 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 00:37:13.457045  593428 kubeadm.go:310] 
	I0815 00:37:13.457121  593428 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 00:37:13.457132  593428 kubeadm.go:310] 
	I0815 00:37:13.457157  593428 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 00:37:13.457218  593428 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 00:37:13.457274  593428 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 00:37:13.457283  593428 kubeadm.go:310] 
	I0815 00:37:13.457335  593428 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 00:37:13.457342  593428 kubeadm.go:310] 
	I0815 00:37:13.457388  593428 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 00:37:13.457398  593428 kubeadm.go:310] 
	I0815 00:37:13.457448  593428 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 00:37:13.457528  593428 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 00:37:13.457598  593428 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 00:37:13.457606  593428 kubeadm.go:310] 
	I0815 00:37:13.457686  593428 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 00:37:13.457768  593428 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 00:37:13.457779  593428 kubeadm.go:310] 
	I0815 00:37:13.457860  593428 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token epynmq.vh2o56imlk1r75jf \
	I0815 00:37:13.457962  593428 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1312024efd32e3e49f7705f2fa06ece2ef27fb9c4b9fa5f7f8c25eae26cd4159 \
	I0815 00:37:13.457986  593428 kubeadm.go:310] 	--control-plane 
	I0815 00:37:13.457997  593428 kubeadm.go:310] 
	I0815 00:37:13.458078  593428 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 00:37:13.458087  593428 kubeadm.go:310] 
	I0815 00:37:13.458165  593428 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token epynmq.vh2o56imlk1r75jf \
	I0815 00:37:13.458267  593428 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1312024efd32e3e49f7705f2fa06ece2ef27fb9c4b9fa5f7f8c25eae26cd4159 
	I0815 00:37:13.462414  593428 kubeadm.go:310] W0815 00:36:56.207155    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:37:13.462758  593428 kubeadm.go:310] W0815 00:36:56.208696    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:37:13.463008  593428 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0815 00:37:13.463153  593428 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 00:37:13.463190  593428 cni.go:84] Creating CNI manager for ""
	I0815 00:37:13.463203  593428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 00:37:13.466615  593428 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 00:37:13.468786  593428 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 00:37:13.472552  593428 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 00:37:13.472570  593428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 00:37:13.490025  593428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 00:37:13.777881  593428 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 00:37:13.777984  593428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:37:13.778010  593428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-428464 minikube.k8s.io/updated_at=2024_08_15T00_37_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=addons-428464 minikube.k8s.io/primary=true
	I0815 00:37:13.939420  593428 ops.go:34] apiserver oom_adj: -16
	I0815 00:37:13.939500  593428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:37:14.439972  593428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:37:14.940136  593428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:37:15.439730  593428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:37:15.939643  593428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:37:16.439809  593428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:37:16.939645  593428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:37:17.440397  593428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:37:17.537859  593428 kubeadm.go:1113] duration metric: took 3.759928512s to wait for elevateKubeSystemPrivileges
	I0815 00:37:17.537885  593428 kubeadm.go:394] duration metric: took 21.504830889s to StartCluster
	I0815 00:37:17.537903  593428 settings.go:142] acquiring lock: {Name:mkf353d296e2684cbdd29a016c10a0eb45e9f213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:37:17.538530  593428 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 00:37:17.538906  593428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/kubeconfig: {Name:mka65351b6674d2edd84b4cf38d527ec03739af8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:37:17.539104  593428 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0815 00:37:17.539214  593428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 00:37:17.539496  593428 config.go:182] Loaded profile config "addons-428464": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 00:37:17.539526  593428 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0815 00:37:17.539599  593428 addons.go:69] Setting yakd=true in profile "addons-428464"
	I0815 00:37:17.539621  593428 addons.go:234] Setting addon yakd=true in "addons-428464"
	I0815 00:37:17.539643  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.540161  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.540455  593428 addons.go:69] Setting inspektor-gadget=true in profile "addons-428464"
	I0815 00:37:17.540484  593428 addons.go:234] Setting addon inspektor-gadget=true in "addons-428464"
	I0815 00:37:17.540508  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.540923  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.544391  593428 addons.go:69] Setting metrics-server=true in profile "addons-428464"
	I0815 00:37:17.544485  593428 addons.go:234] Setting addon metrics-server=true in "addons-428464"
	I0815 00:37:17.544575  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.545055  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.545213  593428 addons.go:69] Setting cloud-spanner=true in profile "addons-428464"
	I0815 00:37:17.545239  593428 addons.go:234] Setting addon cloud-spanner=true in "addons-428464"
	I0815 00:37:17.545261  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.545638  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.551054  593428 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-428464"
	I0815 00:37:17.551132  593428 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-428464"
	I0815 00:37:17.551170  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.551628  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.551768  593428 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-428464"
	I0815 00:37:17.551795  593428 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-428464"
	I0815 00:37:17.551823  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.552278  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.578403  593428 addons.go:69] Setting default-storageclass=true in profile "addons-428464"
	I0815 00:37:17.578464  593428 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-428464"
	I0815 00:37:17.578786  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.579068  593428 addons.go:69] Setting registry=true in profile "addons-428464"
	I0815 00:37:17.579099  593428 addons.go:234] Setting addon registry=true in "addons-428464"
	I0815 00:37:17.579133  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.579550  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.595916  593428 addons.go:69] Setting storage-provisioner=true in profile "addons-428464"
	I0815 00:37:17.595969  593428 addons.go:234] Setting addon storage-provisioner=true in "addons-428464"
	I0815 00:37:17.596004  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.596481  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.600896  593428 addons.go:69] Setting gcp-auth=true in profile "addons-428464"
	I0815 00:37:17.600958  593428 mustload.go:65] Loading cluster: addons-428464
	I0815 00:37:17.601168  593428 config.go:182] Loaded profile config "addons-428464": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 00:37:17.601480  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.608883  593428 addons.go:69] Setting ingress=true in profile "addons-428464"
	I0815 00:37:17.608930  593428 addons.go:234] Setting addon ingress=true in "addons-428464"
	I0815 00:37:17.608981  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.609441  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.610076  593428 addons.go:69] Setting ingress-dns=true in profile "addons-428464"
	I0815 00:37:17.610111  593428 addons.go:234] Setting addon ingress-dns=true in "addons-428464"
	I0815 00:37:17.610194  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.610721  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.611906  593428 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-428464"
	I0815 00:37:17.611969  593428 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-428464"
	I0815 00:37:17.617578  593428 addons.go:69] Setting volcano=true in profile "addons-428464"
	I0815 00:37:17.617627  593428 addons.go:234] Setting addon volcano=true in "addons-428464"
	I0815 00:37:17.617662  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.618101  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.627123  593428 out.go:177] * Verifying Kubernetes components...
	I0815 00:37:17.631420  593428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:37:17.649068  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.652840  593428 addons.go:69] Setting volumesnapshots=true in profile "addons-428464"
	I0815 00:37:17.652888  593428 addons.go:234] Setting addon volumesnapshots=true in "addons-428464"
	I0815 00:37:17.652925  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.653385  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.668626  593428 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 00:37:17.687072  593428 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 00:37:17.688873  593428 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 00:37:17.688912  593428 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 00:37:17.688981  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:17.708716  593428 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 00:37:17.710838  593428 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 00:37:17.710856  593428 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 00:37:17.710921  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:17.711942  593428 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 00:37:17.733327  593428 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:37:17.737131  593428 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:37:17.740024  593428 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 00:37:17.742649  593428 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 00:37:17.743878  593428 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:37:17.743935  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 00:37:17.744032  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:17.747545  593428 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 00:37:17.749660  593428 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 00:37:17.749756  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:17.752382  593428 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 00:37:17.752715  593428 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:37:17.752731  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 00:37:17.752874  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:17.756182  593428 addons.go:234] Setting addon default-storageclass=true in "addons-428464"
	I0815 00:37:17.756283  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.756938  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.781920  593428 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 00:37:17.781944  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 00:37:17.782007  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:17.804066  593428 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 00:37:17.805569  593428 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 00:37:17.807528  593428 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 00:37:17.809165  593428 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 00:37:17.810877  593428 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 00:37:17.812695  593428 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 00:37:17.814585  593428 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 00:37:17.815574  593428 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-428464"
	I0815 00:37:17.815610  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.816041  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:17.816321  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:17.831958  593428 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 00:37:17.831989  593428 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 00:37:17.832062  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:17.846686  593428 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 00:37:17.849491  593428 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:37:17.849511  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 00:37:17.849573  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:17.876813  593428 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 00:37:17.878584  593428 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:37:17.878613  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 00:37:17.878682  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:17.897028  593428 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 00:37:17.905198  593428 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 00:37:17.906880  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:17.910584  593428 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 00:37:17.910651  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 00:37:17.910757  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:17.939513  593428 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0815 00:37:17.939838  593428 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 00:37:17.950921  593428 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0815 00:37:17.963702  593428 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0815 00:37:17.963914  593428 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 00:37:17.963962  593428 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 00:37:17.964068  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:17.966530  593428 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0815 00:37:17.966555  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0815 00:37:17.966621  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:18.003166  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:18.029398  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:18.044154  593428 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 00:37:18.044179  593428 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 00:37:18.044254  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:18.050822  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:18.073211  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:18.079510  593428 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 00:37:18.081555  593428 out.go:177]   - Using image docker.io/busybox:stable
	I0815 00:37:18.083530  593428 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:37:18.083555  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 00:37:18.083628  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:18.085954  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:18.094214  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:18.109936  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:18.110773  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:18.132583  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:18.142279  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	W0815 00:37:18.147363  593428 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 00:37:18.148166  593428 retry.go:31] will retry after 213.044513ms: ssh: handshake failed: EOF
	I0815 00:37:18.147909  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:18.167547  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	W0815 00:37:18.168755  593428 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 00:37:18.168780  593428 retry.go:31] will retry after 214.859346ms: ssh: handshake failed: EOF
	I0815 00:37:18.173680  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:18.282720  593428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 00:37:18.282830  593428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W0815 00:37:18.389922  593428 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 00:37:18.390009  593428 retry.go:31] will retry after 540.500856ms: ssh: handshake failed: EOF
	I0815 00:37:18.611896  593428 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 00:37:18.611974  593428 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 00:37:18.809680  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:37:18.864305  593428 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 00:37:18.864332  593428 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 00:37:18.875243  593428 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 00:37:18.875268  593428 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 00:37:19.011873  593428 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 00:37:19.011955  593428 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 00:37:19.023608  593428 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 00:37:19.023681  593428 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 00:37:19.030526  593428 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 00:37:19.030595  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 00:37:19.038262  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 00:37:19.110211  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:37:19.132074  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0815 00:37:19.132364  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:37:19.138921  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:37:19.146057  593428 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 00:37:19.146128  593428 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 00:37:19.166320  593428 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 00:37:19.166430  593428 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 00:37:19.235269  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 00:37:19.280850  593428 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 00:37:19.280926  593428 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 00:37:19.382621  593428 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 00:37:19.382694  593428 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 00:37:19.452033  593428 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 00:37:19.452054  593428 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 00:37:19.453757  593428 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:37:19.453772  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 00:37:19.465726  593428 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:37:19.465797  593428 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 00:37:19.472553  593428 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 00:37:19.472624  593428 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 00:37:19.502316  593428 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 00:37:19.502396  593428 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 00:37:19.598797  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:37:19.632957  593428 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 00:37:19.633030  593428 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 00:37:19.708353  593428 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 00:37:19.708427  593428 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 00:37:19.723230  593428 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 00:37:19.723304  593428 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 00:37:19.727420  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:37:19.734455  593428 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 00:37:19.734534  593428 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 00:37:19.767201  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:37:19.952520  593428 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 00:37:19.952602  593428 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 00:37:20.028766  593428 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 00:37:20.028800  593428 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 00:37:20.071218  593428 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:37:20.071245  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 00:37:20.223840  593428 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 00:37:20.223896  593428 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 00:37:20.226749  593428 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:37:20.226772  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 00:37:20.329899  593428 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 00:37:20.329924  593428 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 00:37:20.449690  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:37:20.709500  593428 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 00:37:20.709525  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 00:37:20.729728  593428 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:37:20.729755  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 00:37:20.745210  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:37:21.119461  593428 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.836600561s)
	I0815 00:37:21.120422  593428 node_ready.go:35] waiting up to 6m0s for node "addons-428464" to be "Ready" ...
	I0815 00:37:21.120642  593428 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.837890402s)
	I0815 00:37:21.120669  593428 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0815 00:37:21.125101  593428 node_ready.go:49] node "addons-428464" has status "Ready":"True"
	I0815 00:37:21.125131  593428 node_ready.go:38] duration metric: took 4.677007ms for node "addons-428464" to be "Ready" ...
	I0815 00:37:21.125141  593428 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:37:21.146983  593428 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-j58rb" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:21.178506  593428 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 00:37:21.178536  593428 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 00:37:21.340072  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:37:21.453348  593428 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 00:37:21.453374  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 00:37:21.645090  593428 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-428464" context rescaled to 1 replicas
	I0815 00:37:21.830425  593428 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 00:37:21.830452  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 00:37:22.157018  593428 pod_ready.go:97] error getting pod "coredns-6f6b679f8f-j58rb" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-j58rb" not found
	I0815 00:37:22.157049  593428 pod_ready.go:81] duration metric: took 1.010021484s for pod "coredns-6f6b679f8f-j58rb" in "kube-system" namespace to be "Ready" ...
	E0815 00:37:22.157061  593428 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-j58rb" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-j58rb" not found
	I0815 00:37:22.157069  593428 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:22.157513  593428 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:37:22.157533  593428 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 00:37:22.464077  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:37:24.172873  593428 pod_ready.go:102] pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace has status "Ready":"False"
	I0815 00:37:25.029961  593428 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 00:37:25.030123  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:25.053893  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:25.715587  593428 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 00:37:25.894765  593428 addons.go:234] Setting addon gcp-auth=true in "addons-428464"
	I0815 00:37:25.894884  593428 host.go:66] Checking if "addons-428464" exists ...
	I0815 00:37:25.895448  593428 cli_runner.go:164] Run: docker container inspect addons-428464 --format={{.State.Status}}
	I0815 00:37:25.918927  593428 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 00:37:25.918977  593428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-428464
	I0815 00:37:25.945653  593428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/addons-428464/id_rsa Username:docker}
	I0815 00:37:26.683425  593428 pod_ready.go:102] pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace has status "Ready":"False"
	I0815 00:37:26.799165  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.989437385s)
	I0815 00:37:26.799249  593428 addons.go:475] Verifying addon ingress=true in "addons-428464"
	I0815 00:37:26.799367  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.761018035s)
	I0815 00:37:26.799529  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.689246576s)
	I0815 00:37:26.799551  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.667144538s)
	I0815 00:37:26.802094  593428 out.go:177] * Verifying ingress addon...
	I0815 00:37:26.804992  593428 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0815 00:37:26.809606  593428 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 00:37:26.809627  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:27.372202  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:27.834620  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:28.314400  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:28.426109  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.29400205s)
	I0815 00:37:28.426172  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.28718292s)
	I0815 00:37:28.426199  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.190860305s)
	I0815 00:37:28.426389  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.827522699s)
	I0815 00:37:28.426520  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.699037228s)
	I0815 00:37:28.426540  593428 addons.go:475] Verifying addon metrics-server=true in "addons-428464"
	I0815 00:37:28.426570  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.659298541s)
	I0815 00:37:28.426584  593428 addons.go:475] Verifying addon registry=true in "addons-428464"
	I0815 00:37:28.426726  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.977006392s)
	I0815 00:37:28.426978  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.681721882s)
	W0815 00:37:28.427014  593428 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:37:28.427031  593428 retry.go:31] will retry after 234.036098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:37:28.427093  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.086989888s)
	I0815 00:37:28.430220  593428 out.go:177] * Verifying registry addon...
	I0815 00:37:28.439813  593428 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-428464 service yakd-dashboard -n yakd-dashboard
	
	I0815 00:37:28.442727  593428 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 00:37:28.503835  593428 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 00:37:28.503947  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0815 00:37:28.520594  593428 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0815 00:37:28.661184  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:37:28.821629  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:28.954292  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:29.172107  593428 pod_ready.go:102] pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace has status "Ready":"False"
	I0815 00:37:29.176716  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.712546263s)
	I0815 00:37:29.176802  593428 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-428464"
	I0815 00:37:29.177002  593428 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.258054565s)
	I0815 00:37:29.179215  593428 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 00:37:29.179450  593428 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:37:29.181988  593428 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 00:37:29.182761  593428 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 00:37:29.184564  593428 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 00:37:29.184614  593428 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 00:37:29.192682  593428 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 00:37:29.192764  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:29.273419  593428 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 00:37:29.273495  593428 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 00:37:29.309046  593428 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:37:29.309115  593428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 00:37:29.311690  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:29.390462  593428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:37:29.446745  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:29.689510  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:29.809469  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:29.946614  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:30.189047  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:30.317684  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:30.426584  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.765331974s)
	I0815 00:37:30.426711  593428 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.036186176s)
	I0815 00:37:30.429903  593428 addons.go:475] Verifying addon gcp-auth=true in "addons-428464"
	I0815 00:37:30.433421  593428 out.go:177] * Verifying gcp-auth addon...
	I0815 00:37:30.436242  593428 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 00:37:30.439344  593428 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 00:37:30.447126  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:30.688025  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:30.810573  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:30.947835  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:31.188140  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:31.310559  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:31.447373  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:31.664385  593428 pod_ready.go:102] pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace has status "Ready":"False"
	I0815 00:37:31.690623  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:31.810398  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:31.946721  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:32.188848  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:32.312231  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:32.447132  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:32.688384  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:32.810757  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:32.946807  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:33.191591  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:33.311707  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:33.446777  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:33.664568  593428 pod_ready.go:102] pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace has status "Ready":"False"
	I0815 00:37:33.688569  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:33.809882  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:33.946927  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:34.189455  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:34.309843  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:34.446851  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:34.688637  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:34.811205  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:34.947061  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:35.187777  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:35.319509  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:35.447169  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:35.688105  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:35.810121  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:35.947740  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:36.166860  593428 pod_ready.go:102] pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace has status "Ready":"False"
	I0815 00:37:36.188238  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:36.309479  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:36.447730  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:36.688299  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:36.810115  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:36.946721  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:37.188512  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:37.310085  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:37.446847  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:37.687832  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:37.810257  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:37.947142  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:38.187605  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:38.309626  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:38.446629  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:38.662550  593428 pod_ready.go:102] pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace has status "Ready":"False"
	I0815 00:37:38.688160  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:38.809752  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:38.948156  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:39.188322  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:39.309468  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:39.446430  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:39.687835  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:39.811683  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:39.947384  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:40.193511  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:40.310377  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:40.447087  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:40.663917  593428 pod_ready.go:102] pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace has status "Ready":"False"
	I0815 00:37:40.688292  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:40.809570  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:40.947248  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:41.188777  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:41.310183  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:41.447386  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:41.692168  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:41.810968  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:41.947657  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:42.188947  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:42.310486  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:42.447032  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:42.664293  593428 pod_ready.go:102] pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace has status "Ready":"False"
	I0815 00:37:42.689290  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:42.809346  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:42.947683  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:43.188483  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:43.310094  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:43.446881  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:43.688337  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:43.811092  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:43.947117  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:44.188420  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:44.310428  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:44.448066  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:44.687614  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:44.809946  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:44.947637  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:45.213230  593428 pod_ready.go:102] pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace has status "Ready":"False"
	I0815 00:37:45.214757  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:45.311539  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:45.473542  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:45.694328  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:45.811820  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:45.946782  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:46.190857  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:46.312211  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:46.447143  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:46.663650  593428 pod_ready.go:92] pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace has status "Ready":"True"
	I0815 00:37:46.663719  593428 pod_ready.go:81] duration metric: took 24.506641663s for pod "coredns-6f6b679f8f-sm94s" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:46.663747  593428 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-428464" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:46.669058  593428 pod_ready.go:92] pod "etcd-addons-428464" in "kube-system" namespace has status "Ready":"True"
	I0815 00:37:46.669084  593428 pod_ready.go:81] duration metric: took 5.318015ms for pod "etcd-addons-428464" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:46.669099  593428 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-428464" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:46.674114  593428 pod_ready.go:92] pod "kube-apiserver-addons-428464" in "kube-system" namespace has status "Ready":"True"
	I0815 00:37:46.674182  593428 pod_ready.go:81] duration metric: took 5.074298ms for pod "kube-apiserver-addons-428464" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:46.674209  593428 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-428464" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:46.679272  593428 pod_ready.go:92] pod "kube-controller-manager-addons-428464" in "kube-system" namespace has status "Ready":"True"
	I0815 00:37:46.679337  593428 pod_ready.go:81] duration metric: took 5.105839ms for pod "kube-controller-manager-addons-428464" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:46.679382  593428 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8tt9j" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:46.684457  593428 pod_ready.go:92] pod "kube-proxy-8tt9j" in "kube-system" namespace has status "Ready":"True"
	I0815 00:37:46.684490  593428 pod_ready.go:81] duration metric: took 5.077154ms for pod "kube-proxy-8tt9j" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:46.684502  593428 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-428464" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:46.688537  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:46.809623  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:46.957821  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:47.073687  593428 pod_ready.go:92] pod "kube-scheduler-addons-428464" in "kube-system" namespace has status "Ready":"True"
	I0815 00:37:47.073762  593428 pod_ready.go:81] duration metric: took 389.251524ms for pod "kube-scheduler-addons-428464" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:47.073789  593428 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jbhnv" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:47.191431  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:47.310590  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:47.446844  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:47.461192  593428 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jbhnv" in "kube-system" namespace has status "Ready":"True"
	I0815 00:37:47.461266  593428 pod_ready.go:81] duration metric: took 387.455576ms for pod "nvidia-device-plugin-daemonset-jbhnv" in "kube-system" namespace to be "Ready" ...
	I0815 00:37:47.461292  593428 pod_ready.go:38] duration metric: took 26.336137478s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:37:47.461337  593428 api_server.go:52] waiting for apiserver process to appear ...
	I0815 00:37:47.461419  593428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:37:47.520688  593428 api_server.go:72] duration metric: took 29.981554859s to wait for apiserver process to appear ...
	I0815 00:37:47.520756  593428 api_server.go:88] waiting for apiserver healthz status ...
	I0815 00:37:47.520792  593428 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 00:37:47.537466  593428 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0815 00:37:47.538763  593428 api_server.go:141] control plane version: v1.31.0
	I0815 00:37:47.538824  593428 api_server.go:131] duration metric: took 18.047576ms to wait for apiserver health ...
	I0815 00:37:47.538849  593428 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 00:37:47.669924  593428 system_pods.go:59] 18 kube-system pods found
	I0815 00:37:47.670014  593428 system_pods.go:61] "coredns-6f6b679f8f-sm94s" [e81e8722-b031-40dc-a6fa-6af305c6bdef] Running
	I0815 00:37:47.670039  593428 system_pods.go:61] "csi-hostpath-attacher-0" [e93af1ee-e87d-4ca2-b4ee-9e1ead919d45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 00:37:47.670079  593428 system_pods.go:61] "csi-hostpath-resizer-0" [6d7b911b-42a0-436a-a9fb-5eedbca9a5e2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 00:37:47.670106  593428 system_pods.go:61] "csi-hostpathplugin-hblh7" [eca5edde-a377-460c-9dfc-1c916a1d05a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 00:37:47.670127  593428 system_pods.go:61] "etcd-addons-428464" [99b48359-c8f6-4a0a-8f1f-819cfe35dec4] Running
	I0815 00:37:47.670146  593428 system_pods.go:61] "kindnet-nw4qk" [346769b2-3aeb-4719-9c79-034c0b420c7f] Running
	I0815 00:37:47.670165  593428 system_pods.go:61] "kube-apiserver-addons-428464" [9c82fec2-077c-49ca-b83d-0ff73921ee82] Running
	I0815 00:37:47.670197  593428 system_pods.go:61] "kube-controller-manager-addons-428464" [cdbc42d7-521d-482a-ae41-38ff75c45d0c] Running
	I0815 00:37:47.670218  593428 system_pods.go:61] "kube-ingress-dns-minikube" [f27be750-e6fe-4e98-86ff-89ac15835070] Running
	I0815 00:37:47.670242  593428 system_pods.go:61] "kube-proxy-8tt9j" [943f7d8f-b468-4a5c-a38f-90cb1348ee1d] Running
	I0815 00:37:47.670275  593428 system_pods.go:61] "kube-scheduler-addons-428464" [f66ea701-dd5a-4e5a-a77d-ba90b6204fa7] Running
	I0815 00:37:47.670302  593428 system_pods.go:61] "metrics-server-8988944d9-xtzcw" [916fbe08-9127-45ed-b5b6-a7ff268a239b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 00:37:47.670324  593428 system_pods.go:61] "nvidia-device-plugin-daemonset-jbhnv" [415a8833-749a-4d27-91fa-ddb46d8b9062] Running
	I0815 00:37:47.670351  593428 system_pods.go:61] "registry-6fb4cdfc84-lmclw" [46f21c4d-b129-4b47-92a6-2655cf7b7dcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 00:37:47.670389  593428 system_pods.go:61] "registry-proxy-sc6vh" [bf1f2246-5a9a-49e0-91af-08927e629891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 00:37:47.670420  593428 system_pods.go:61] "snapshot-controller-56fcc65765-f8vft" [73cdc871-c52b-4e15-ba2d-58d0cfd46b0c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 00:37:47.670447  593428 system_pods.go:61] "snapshot-controller-56fcc65765-hsqlc" [eb6c8a53-878c-4730-9167-c0121a7e4d72] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 00:37:47.670475  593428 system_pods.go:61] "storage-provisioner" [a627a03b-c76a-433d-b9a1-a48c56e837c0] Running
	I0815 00:37:47.670505  593428 system_pods.go:74] duration metric: took 131.633563ms to wait for pod list to return data ...
	I0815 00:37:47.670532  593428 default_sa.go:34] waiting for default service account to be created ...
	I0815 00:37:47.687917  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:47.809691  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:47.863505  593428 default_sa.go:45] found service account: "default"
	I0815 00:37:47.863541  593428 default_sa.go:55] duration metric: took 192.989808ms for default service account to be created ...
	I0815 00:37:47.863553  593428 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 00:37:47.947321  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:48.067879  593428 system_pods.go:86] 18 kube-system pods found
	I0815 00:37:48.067921  593428 system_pods.go:89] "coredns-6f6b679f8f-sm94s" [e81e8722-b031-40dc-a6fa-6af305c6bdef] Running
	I0815 00:37:48.067934  593428 system_pods.go:89] "csi-hostpath-attacher-0" [e93af1ee-e87d-4ca2-b4ee-9e1ead919d45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 00:37:48.067942  593428 system_pods.go:89] "csi-hostpath-resizer-0" [6d7b911b-42a0-436a-a9fb-5eedbca9a5e2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 00:37:48.067953  593428 system_pods.go:89] "csi-hostpathplugin-hblh7" [eca5edde-a377-460c-9dfc-1c916a1d05a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 00:37:48.067958  593428 system_pods.go:89] "etcd-addons-428464" [99b48359-c8f6-4a0a-8f1f-819cfe35dec4] Running
	I0815 00:37:48.067964  593428 system_pods.go:89] "kindnet-nw4qk" [346769b2-3aeb-4719-9c79-034c0b420c7f] Running
	I0815 00:37:48.067968  593428 system_pods.go:89] "kube-apiserver-addons-428464" [9c82fec2-077c-49ca-b83d-0ff73921ee82] Running
	I0815 00:37:48.067973  593428 system_pods.go:89] "kube-controller-manager-addons-428464" [cdbc42d7-521d-482a-ae41-38ff75c45d0c] Running
	I0815 00:37:48.067979  593428 system_pods.go:89] "kube-ingress-dns-minikube" [f27be750-e6fe-4e98-86ff-89ac15835070] Running
	I0815 00:37:48.067990  593428 system_pods.go:89] "kube-proxy-8tt9j" [943f7d8f-b468-4a5c-a38f-90cb1348ee1d] Running
	I0815 00:37:48.067995  593428 system_pods.go:89] "kube-scheduler-addons-428464" [f66ea701-dd5a-4e5a-a77d-ba90b6204fa7] Running
	I0815 00:37:48.068002  593428 system_pods.go:89] "metrics-server-8988944d9-xtzcw" [916fbe08-9127-45ed-b5b6-a7ff268a239b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 00:37:48.068012  593428 system_pods.go:89] "nvidia-device-plugin-daemonset-jbhnv" [415a8833-749a-4d27-91fa-ddb46d8b9062] Running
	I0815 00:37:48.068018  593428 system_pods.go:89] "registry-6fb4cdfc84-lmclw" [46f21c4d-b129-4b47-92a6-2655cf7b7dcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 00:37:48.068025  593428 system_pods.go:89] "registry-proxy-sc6vh" [bf1f2246-5a9a-49e0-91af-08927e629891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 00:37:48.068036  593428 system_pods.go:89] "snapshot-controller-56fcc65765-f8vft" [73cdc871-c52b-4e15-ba2d-58d0cfd46b0c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 00:37:48.068044  593428 system_pods.go:89] "snapshot-controller-56fcc65765-hsqlc" [eb6c8a53-878c-4730-9167-c0121a7e4d72] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 00:37:48.068049  593428 system_pods.go:89] "storage-provisioner" [a627a03b-c76a-433d-b9a1-a48c56e837c0] Running
	I0815 00:37:48.068061  593428 system_pods.go:126] duration metric: took 204.501946ms to wait for k8s-apps to be running ...
	I0815 00:37:48.068075  593428 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 00:37:48.068139  593428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:37:48.081520  593428 system_svc.go:56] duration metric: took 13.435471ms WaitForService to wait for kubelet
	I0815 00:37:48.081548  593428 kubeadm.go:582] duration metric: took 30.542420798s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:37:48.081569  593428 node_conditions.go:102] verifying NodePressure condition ...
	I0815 00:37:48.187762  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:48.261674  593428 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0815 00:37:48.261711  593428 node_conditions.go:123] node cpu capacity is 2
	I0815 00:37:48.261724  593428 node_conditions.go:105] duration metric: took 180.149388ms to run NodePressure ...
	I0815 00:37:48.261737  593428 start.go:241] waiting for startup goroutines ...
	I0815 00:37:48.310486  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:48.447170  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:48.687968  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:48.809627  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:48.949973  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:49.187255  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:49.309482  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:49.446200  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:49.688498  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:49.810094  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:49.946707  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:50.198100  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:50.310848  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:50.449235  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:50.689007  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:50.810056  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:50.947040  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:51.189469  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:51.311176  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:51.449110  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:51.691589  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:51.812088  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:51.946602  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:52.189042  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:52.318201  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:52.458630  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:52.687999  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:52.809983  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:52.947368  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:53.188856  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:53.310526  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:53.446458  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:53.688548  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:53.810227  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:53.947205  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:54.187539  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:54.310403  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:54.446962  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:54.687704  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:54.809290  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:54.946865  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:55.188058  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:55.309781  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:55.446727  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:55.687482  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:55.809668  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:55.946657  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:56.188197  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:56.310366  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:56.447369  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:56.688133  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:56.809477  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:56.946842  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:57.187683  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:57.309965  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:57.446915  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:57.688343  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:57.810290  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:57.946227  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:58.192941  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:58.311436  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:58.446878  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:58.687673  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:58.810330  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:58.948328  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:59.188786  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:59.309778  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:59.446464  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:37:59.688412  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:37:59.809124  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:37:59.946903  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:38:00.233657  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:00.310751  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:00.449657  593428 kapi.go:107] duration metric: took 32.006929438s to wait for kubernetes.io/minikube-addons=registry ...
	I0815 00:38:00.687358  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:00.812489  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:01.188959  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:01.309879  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:01.688399  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:01.809721  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:02.188131  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:02.309945  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:02.688028  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:02.809368  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:03.190105  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:03.310297  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:03.687609  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:03.809424  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:04.187643  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:04.313226  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:04.688346  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:04.810038  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:05.199940  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:05.310609  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:05.694346  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:05.810299  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:06.188891  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:06.310528  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:06.687781  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:06.810971  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:07.189225  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:07.309854  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:07.688612  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:07.809894  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:08.188498  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:08.309899  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:08.688091  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:08.809409  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:09.190543  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:09.309861  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:09.687260  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:09.809409  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:10.188059  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:10.309635  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:10.688286  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:10.809925  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:11.187257  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:11.309488  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:11.691864  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:11.809701  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:12.188631  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:12.310308  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:12.688097  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:12.809936  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:13.188618  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:13.318478  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:13.688263  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:13.810134  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:14.188066  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:14.310593  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:14.688282  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:14.809439  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:15.188456  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:15.309841  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:15.687765  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:15.809623  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:16.188794  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:16.312523  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:16.688449  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:16.809409  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:17.190456  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:17.309410  593428 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:38:17.705576  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:17.810861  593428 kapi.go:107] duration metric: took 51.00586809s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 00:38:18.187663  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:18.687028  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:19.188745  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:19.688276  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:20.187002  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:20.687684  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:21.188168  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:21.688143  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:22.187656  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:22.687411  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:23.187287  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:23.687723  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:24.187969  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:24.688446  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:25.189546  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:38:25.688590  593428 kapi.go:107] duration metric: took 56.505822445s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 00:38:52.441083  593428 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 00:38:52.441105  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:52.939973  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:53.439759  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:53.940416  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:54.439729  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:54.940459  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:55.440457  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:55.940435  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:56.440489  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:56.940373  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:57.439489  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:57.940253  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:58.439648  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:58.939263  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:59.439839  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:38:59.939266  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:00.440501  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:00.940460  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:01.439989  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:01.940360  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:02.440551  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:02.940305  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:03.439761  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:03.940425  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:04.440389  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:04.939756  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:05.440552  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:05.940578  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:06.440460  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:06.941083  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:07.440243  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:07.940418  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:08.440185  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:08.939451  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:09.440335  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:09.940126  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:10.439637  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:10.944415  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:11.440040  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:11.940568  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:12.440091  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:12.939812  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:13.440265  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:13.939813  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:14.439461  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:14.940584  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:15.440375  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:15.940047  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:16.440020  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:16.940121  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:17.439916  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:17.940031  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:18.439672  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:18.941212  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:19.439831  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:19.939713  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:20.440698  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:20.940411  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:21.439586  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:21.939794  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:22.440078  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:22.940346  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:23.439767  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:23.940231  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:24.440660  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:24.941685  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:25.440250  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:25.940100  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:26.440756  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:26.940260  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:27.439348  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:27.939837  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:28.439896  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:28.939681  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:29.439173  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:29.939657  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:30.440761  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:30.939622  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:31.440300  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:31.944124  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:32.440771  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:32.940076  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:33.445689  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:33.946323  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:34.440210  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:34.941091  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:35.440810  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:35.939703  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:36.440947  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:36.940513  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:37.440338  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:37.940056  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:38.440388  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:38.940622  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:39.439217  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:39.939621  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:40.440566  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:40.940587  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:41.440525  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:41.940757  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:42.439239  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:42.939912  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:43.439447  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:43.940218  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:44.439961  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:44.939772  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:45.441982  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:45.939326  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:46.440403  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:46.939549  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:47.440441  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:47.940478  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:48.439806  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:48.940534  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:49.439969  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:49.939821  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:50.440433  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:50.939439  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:51.439722  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:51.940469  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:52.440265  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:52.940302  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:53.439031  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:53.940288  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:54.440039  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:54.939642  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:55.440649  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:55.940136  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:56.439643  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:56.940436  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:57.439269  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:57.939953  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:58.439550  593428 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:39:58.940388  593428 kapi.go:107] duration metric: took 2m28.504145822s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 00:39:58.942260  593428 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-428464 cluster.
	I0815 00:39:58.945109  593428 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 00:39:58.946786  593428 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0815 00:39:58.949553  593428 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, volcano, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0815 00:39:58.951377  593428 addons.go:510] duration metric: took 2m41.411843216s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin volcano ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0815 00:39:58.951428  593428 start.go:246] waiting for cluster config update ...
	I0815 00:39:58.951451  593428 start.go:255] writing updated cluster config ...
	I0815 00:39:58.951761  593428 ssh_runner.go:195] Run: rm -f paused
	I0815 00:39:59.293475  593428 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 00:39:59.295681  593428 out.go:177] * Done! kubectl is now configured to use "addons-428464" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	475f6690fb580       e2d3313f65753       2 minutes ago       Exited              gadget                                   5                   fba7000268930       gadget-qnvvp
	9e8864f5a1974       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   b116c45543403       gcp-auth-89d5ffd79-9tlrx
	abd3f3b1b7182       8b46b1cd48760       4 minutes ago       Running             admission                                0                   35f37dbea6c79       volcano-admission-77d7d48b68-bgbq6
	e8cade733ae44       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   0408f96492373       csi-hostpathplugin-hblh7
	51e72c44ea28f       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   0408f96492373       csi-hostpathplugin-hblh7
	7282a549ceac5       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   0408f96492373       csi-hostpathplugin-hblh7
	1d2100da4efef       08f6b2990811a       4 minutes ago       Running             hostpath                                 0                   0408f96492373       csi-hostpathplugin-hblh7
	d18f455206c88       0107d56dbc0be       4 minutes ago       Running             node-driver-registrar                    0                   0408f96492373       csi-hostpathplugin-hblh7
	89a2ad6245adf       24f8f979639f1       5 minutes ago       Running             controller                               0                   1276f62ac689f       ingress-nginx-controller-7559cbf597-kmcmd
	362fd4b8e0298       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   e7679e8d5a3bf       csi-hostpath-attacher-0
	30016d3e5b90a       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   5633c1121829c       csi-hostpath-resizer-0
	20f41ed81f139       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   da84fc299ec78       volcano-controllers-56675bb4d5-wvj4z
	6a7070196f6f6       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   0408f96492373       csi-hostpathplugin-hblh7
	7dfbf6974a3a4       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   fa12a52e971bd       volcano-scheduler-576bc46687-xggvj
	10525410020d0       296b5f799fcd8       5 minutes ago       Exited              patch                                    0                   58344682a3053       ingress-nginx-admission-patch-29h5c
	7603ba2cf1860       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   9f435e35515be       snapshot-controller-56fcc65765-hsqlc
	abeffa190326a       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   043a7e69f4885       local-path-provisioner-86d989889c-ggccd
	97d5e00ed1b79       6fed88f43b276       5 minutes ago       Running             registry                                 0                   80aed61619aae       registry-6fb4cdfc84-lmclw
	76cf259c3e3f3       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   dff35a9614bc2       snapshot-controller-56fcc65765-f8vft
	6cbbf028b6ffe       296b5f799fcd8       5 minutes ago       Exited              create                                   0                   bee8da43a3081       ingress-nginx-admission-create-c8zwh
	e2c9b7a1c118d       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   c5ab3433f1f2b       registry-proxy-sc6vh
	22f977971a0b6       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   a09c6c8dc2026       metrics-server-8988944d9-xtzcw
	e0cd94bc0e021       77bdba588b953       5 minutes ago       Running             yakd                                     0                   e7a609dc8bd19       yakd-dashboard-67d98fc6b-4fs9w
	f5644be82b361       2437cf7621777       5 minutes ago       Running             coredns                                  0                   dacdc5133a049       coredns-6f6b679f8f-sm94s
	4ada8709f0b81       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   219e8d62c5d68       nvidia-device-plugin-daemonset-jbhnv
	d80a5a30af5a7       53af6e2c4c343       5 minutes ago       Running             cloud-spanner-emulator                   0                   1af41ed606a7f       cloud-spanner-emulator-c4bc9b5f8-gcfgv
	7b678ae62877f       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   ba0cd342fed8f       kube-ingress-dns-minikube
	9bd62c5cc823a       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   e3410d19b4f95       storage-provisioner
	75bdf321fc948       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   f07411cbb2a70       kindnet-nw4qk
	490ba9258921e       71d55d66fd4ee       5 minutes ago       Running             kube-proxy                               0                   fd84acd8f86b8       kube-proxy-8tt9j
	8d13892b135b5       27e3830e14027       6 minutes ago       Running             etcd                                     0                   eba83668c40ee       etcd-addons-428464
	ec6c8c4665cc9       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   b6e541284fa63       kube-apiserver-addons-428464
	64d68d8225b93       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   df709cd6891b4       kube-scheduler-addons-428464
	bfc5b46cd8cc4       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   059e6b9ba29bc       kube-controller-manager-addons-428464
	
	
	==> containerd <==
	Aug 15 00:40:12 addons-428464 containerd[811]: time="2024-08-15T00:40:12.925787397Z" level=info msg="RemovePodSandbox \"123ef624963acefc4e7d183961fa8e842b403b3e46913699bf30bb06385f6686\" returns successfully"
	Aug 15 00:40:54 addons-428464 containerd[811]: time="2024-08-15T00:40:54.791292953Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\""
	Aug 15 00:40:54 addons-428464 containerd[811]: time="2024-08-15T00:40:54.942021430Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 15 00:40:54 addons-428464 containerd[811]: time="2024-08-15T00:40:54.943465889Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: active requests=0, bytes read=89"
	Aug 15 00:40:54 addons-428464 containerd[811]: time="2024-08-15T00:40:54.948534354Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" with image id \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\", size \"69907666\" in 157.189085ms"
	Aug 15 00:40:54 addons-428464 containerd[811]: time="2024-08-15T00:40:54.948619744Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" returns image reference \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\""
	Aug 15 00:40:54 addons-428464 containerd[811]: time="2024-08-15T00:40:54.953615495Z" level=info msg="CreateContainer within sandbox \"fba700026893001613cccb53b68258826e46fd480146382085f2496fb2b0051c\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 15 00:40:54 addons-428464 containerd[811]: time="2024-08-15T00:40:54.975163178Z" level=info msg="CreateContainer within sandbox \"fba700026893001613cccb53b68258826e46fd480146382085f2496fb2b0051c\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764\""
	Aug 15 00:40:54 addons-428464 containerd[811]: time="2024-08-15T00:40:54.976000853Z" level=info msg="StartContainer for \"475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764\""
	Aug 15 00:40:55 addons-428464 containerd[811]: time="2024-08-15T00:40:55.047721632Z" level=info msg="StartContainer for \"475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764\" returns successfully"
	Aug 15 00:40:56 addons-428464 containerd[811]: time="2024-08-15T00:40:56.361141673Z" level=info msg="shim disconnected" id=475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764 namespace=k8s.io
	Aug 15 00:40:56 addons-428464 containerd[811]: time="2024-08-15T00:40:56.361206353Z" level=warning msg="cleaning up after shim disconnected" id=475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764 namespace=k8s.io
	Aug 15 00:40:56 addons-428464 containerd[811]: time="2024-08-15T00:40:56.361217488Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 15 00:40:56 addons-428464 containerd[811]: time="2024-08-15T00:40:56.968236704Z" level=info msg="RemoveContainer for \"5dd237d33fddcd9eb9dc1d3fb300cdeca20bbe2af02571851f3f02ecaf658212\""
	Aug 15 00:40:56 addons-428464 containerd[811]: time="2024-08-15T00:40:56.977020504Z" level=info msg="RemoveContainer for \"5dd237d33fddcd9eb9dc1d3fb300cdeca20bbe2af02571851f3f02ecaf658212\" returns successfully"
	Aug 15 00:41:12 addons-428464 containerd[811]: time="2024-08-15T00:41:12.929971776Z" level=info msg="RemoveContainer for \"4ff0aa74398843abf35f890703c0b5a93bbba9d97ebf4e631c3eaa7b22fab180\""
	Aug 15 00:41:12 addons-428464 containerd[811]: time="2024-08-15T00:41:12.936752285Z" level=info msg="RemoveContainer for \"4ff0aa74398843abf35f890703c0b5a93bbba9d97ebf4e631c3eaa7b22fab180\" returns successfully"
	Aug 15 00:41:12 addons-428464 containerd[811]: time="2024-08-15T00:41:12.938566958Z" level=info msg="StopPodSandbox for \"cd9fbe21f8d2b7b1df3326af5e21255fe1726c061e6803ade080347ffb9cbf69\""
	Aug 15 00:41:12 addons-428464 containerd[811]: time="2024-08-15T00:41:12.956101254Z" level=info msg="TearDown network for sandbox \"cd9fbe21f8d2b7b1df3326af5e21255fe1726c061e6803ade080347ffb9cbf69\" successfully"
	Aug 15 00:41:12 addons-428464 containerd[811]: time="2024-08-15T00:41:12.956153734Z" level=info msg="StopPodSandbox for \"cd9fbe21f8d2b7b1df3326af5e21255fe1726c061e6803ade080347ffb9cbf69\" returns successfully"
	Aug 15 00:41:12 addons-428464 containerd[811]: time="2024-08-15T00:41:12.956992665Z" level=info msg="RemovePodSandbox for \"cd9fbe21f8d2b7b1df3326af5e21255fe1726c061e6803ade080347ffb9cbf69\""
	Aug 15 00:41:12 addons-428464 containerd[811]: time="2024-08-15T00:41:12.957036423Z" level=info msg="Forcibly stopping sandbox \"cd9fbe21f8d2b7b1df3326af5e21255fe1726c061e6803ade080347ffb9cbf69\""
	Aug 15 00:41:12 addons-428464 containerd[811]: time="2024-08-15T00:41:12.974273432Z" level=info msg="TearDown network for sandbox \"cd9fbe21f8d2b7b1df3326af5e21255fe1726c061e6803ade080347ffb9cbf69\" successfully"
	Aug 15 00:41:12 addons-428464 containerd[811]: time="2024-08-15T00:41:12.980820981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd9fbe21f8d2b7b1df3326af5e21255fe1726c061e6803ade080347ffb9cbf69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 15 00:41:12 addons-428464 containerd[811]: time="2024-08-15T00:41:12.980964275Z" level=info msg="RemovePodSandbox \"cd9fbe21f8d2b7b1df3326af5e21255fe1726c061e6803ade080347ffb9cbf69\" returns successfully"
	
	
	==> coredns [f5644be82b361ed20165d69bd8e189e771f34ef84d7d4c01bd0f3e4c4c3e540b] <==
	[INFO] 10.244.0.6:58283 - 19881 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043339s
	[INFO] 10.244.0.6:35019 - 39347 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002202201s
	[INFO] 10.244.0.6:35019 - 16816 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001323401s
	[INFO] 10.244.0.6:36957 - 17482 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072484s
	[INFO] 10.244.0.6:36957 - 20044 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082962s
	[INFO] 10.244.0.6:42744 - 16515 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089033s
	[INFO] 10.244.0.6:42744 - 50822 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000030655s
	[INFO] 10.244.0.6:37506 - 28238 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009787s
	[INFO] 10.244.0.6:37506 - 12877 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000036792s
	[INFO] 10.244.0.6:59649 - 24256 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102383s
	[INFO] 10.244.0.6:59649 - 15070 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000030318s
	[INFO] 10.244.0.6:52155 - 47375 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002060531s
	[INFO] 10.244.0.6:52155 - 12016 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002092441s
	[INFO] 10.244.0.6:42433 - 17604 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077505s
	[INFO] 10.244.0.6:42433 - 10178 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103655s
	[INFO] 10.244.0.24:36982 - 64310 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000715517s
	[INFO] 10.244.0.24:38437 - 54668 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00057631s
	[INFO] 10.244.0.24:41732 - 25074 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000174523s
	[INFO] 10.244.0.24:50043 - 52310 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0000725s
	[INFO] 10.244.0.24:34754 - 64201 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091102s
	[INFO] 10.244.0.24:45362 - 21789 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059626s
	[INFO] 10.244.0.24:52932 - 54864 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001998377s
	[INFO] 10.244.0.24:59822 - 49580 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001594071s
	[INFO] 10.244.0.24:33434 - 23058 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002282298s
	[INFO] 10.244.0.24:44683 - 55943 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002520361s
	
	
	==> describe nodes <==
	Name:               addons-428464
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-428464
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=addons-428464
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_37_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-428464
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-428464"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-428464
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:43:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:40:17 +0000   Thu, 15 Aug 2024 00:37:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:40:17 +0000   Thu, 15 Aug 2024 00:37:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:40:17 +0000   Thu, 15 Aug 2024 00:37:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:40:17 +0000   Thu, 15 Aug 2024 00:37:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-428464
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a73c7e077f34dbcbe8e48a99cabd22e
	  System UUID:                eb7990ff-e482-4958-aac1-10a23160c9bd
	  Boot ID:                    ea2065b4-362f-4442-9b74-bf31c8d731d6
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-c4bc9b5f8-gcfgv       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  gadget                      gadget-qnvvp                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  gcp-auth                    gcp-auth-89d5ffd79-9tlrx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  ingress-nginx               ingress-nginx-controller-7559cbf597-kmcmd    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m52s
	  kube-system                 coredns-6f6b679f8f-sm94s                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 csi-hostpathplugin-hblh7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 etcd-addons-428464                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m5s
	  kube-system                 kindnet-nw4qk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m1s
	  kube-system                 kube-apiserver-addons-428464                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-controller-manager-addons-428464        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-8tt9j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-addons-428464                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 metrics-server-8988944d9-xtzcw               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m54s
	  kube-system                 nvidia-device-plugin-daemonset-jbhnv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-6fb4cdfc84-lmclw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 registry-proxy-sc6vh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-f8vft         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 snapshot-controller-56fcc65765-hsqlc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  local-path-storage          local-path-provisioner-86d989889c-ggccd      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  volcano-system              volcano-admission-77d7d48b68-bgbq6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  volcano-system              volcano-controllers-56675bb4d5-wvj4z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  volcano-system              volcano-scheduler-576bc46687-xggvj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-4fs9w               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m59s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  6m12s (x8 over 6m12s)  kubelet          Node addons-428464 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m12s (x7 over 6m12s)  kubelet          Node addons-428464 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m12s (x7 over 6m12s)  kubelet          Node addons-428464 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m6s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m6s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m5s                   kubelet          Node addons-428464 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m5s                   kubelet          Node addons-428464 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m5s                   kubelet          Node addons-428464 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m2s                   node-controller  Node addons-428464 event: Registered Node addons-428464 in Controller
	
	
	==> dmesg <==
	[Aug14 23:44] hrtimer: interrupt took 27449952 ns
	
	
	==> etcd [8d13892b135b5638a6b7f3dee206d1baead4e7cc2246207a7d38f786238a525a] <==
	{"level":"info","ts":"2024-08-15T00:37:07.000179Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T00:37:07.004059Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T00:37:07.005856Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T00:37:07.005941Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-15T00:37:07.010389Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-15T00:37:07.251881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-15T00:37:07.252081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-15T00:37:07.252195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-15T00:37:07.252298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-15T00:37:07.252382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-15T00:37:07.252464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-15T00:37:07.252551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-15T00:37:07.256009Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-428464 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T00:37:07.256282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T00:37:07.256633Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T00:37:07.257433Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T00:37:07.259904Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T00:37:07.260857Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T00:37:07.264489Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T00:37:07.272745Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-15T00:37:07.273288Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T00:37:07.279988Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T00:37:07.280135Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T00:37:07.279896Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T00:37:07.280287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [9e8864f5a1974ff120bf02e0301fe0f733b33a2927f5acbf025e785be770468a] <==
	2024/08/15 00:39:58 GCP Auth Webhook started!
	2024/08/15 00:40:15 Ready to marshal response ...
	2024/08/15 00:40:15 Ready to write response ...
	2024/08/15 00:40:16 Ready to marshal response ...
	2024/08/15 00:40:16 Ready to write response ...
	
	
	==> kernel <==
	 00:43:18 up  4:25,  0 users,  load average: 0.16, 1.22, 2.14
	Linux addons-428464 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [75bdf321fc94827d0480c5b0ce2f7481d8bb2405fec1d3d5d322e8f5f2e60932] <==
	I0815 00:42:01.679840       1 main.go:299] handling current node
	I0815 00:42:11.679902       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:42:11.679995       1 main.go:299] handling current node
	W0815 00:42:17.551431       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:42:17.551468       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0815 00:42:21.450827       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:42:21.450936       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 00:42:21.679835       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:42:21.679910       1 main.go:299] handling current node
	I0815 00:42:31.680233       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:42:31.680272       1 main.go:299] handling current node
	W0815 00:42:41.667589       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 00:42:41.667634       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 00:42:41.679667       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:42:41.679702       1 main.go:299] handling current node
	I0815 00:42:51.679710       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:42:51.679746       1 main.go:299] handling current node
	I0815 00:43:01.680243       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:43:01.680277       1 main.go:299] handling current node
	I0815 00:43:11.680194       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:43:11.680232       1 main.go:299] handling current node
	W0815 00:43:14.371000       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:43:14.371041       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0815 00:43:16.457582       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:43:16.457824       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	
	
	==> kube-apiserver [ec6c8c4665cc9e0a46025f7a16baf78f315da6b3df178017ccf5be80adce7f87] <==
	W0815 00:38:27.479807       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:28.562296       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:29.566392       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:30.570595       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:31.603710       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:32.650980       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:33.400727       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.231.152:443: connect: connection refused
	E0815 00:38:33.400780       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.231.152:443: connect: connection refused" logger="UnhandledError"
	W0815 00:38:33.402299       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:33.467086       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.231.152:443: connect: connection refused
	E0815 00:38:33.467129       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.231.152:443: connect: connection refused" logger="UnhandledError"
	W0815 00:38:33.468914       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:33.655744       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:34.672813       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:35.696789       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:36.732706       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:37.809317       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.100.185:443: connect: connection refused
	W0815 00:38:52.370894       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.231.152:443: connect: connection refused
	E0815 00:38:52.370932       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.231.152:443: connect: connection refused" logger="UnhandledError"
	W0815 00:39:33.411063       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.231.152:443: connect: connection refused
	E0815 00:39:33.411104       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.231.152:443: connect: connection refused" logger="UnhandledError"
	W0815 00:39:33.475815       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.231.152:443: connect: connection refused
	E0815 00:39:33.475929       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.231.152:443: connect: connection refused" logger="UnhandledError"
	I0815 00:40:15.840748       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0815 00:40:15.887553       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [bfc5b46cd8cc44611aa58ba007b99b3122804e81632c3ceb05e776ea9c7f73e4] <==
	I0815 00:39:33.427328       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 00:39:33.433140       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 00:39:33.446662       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 00:39:33.487009       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 00:39:33.500205       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 00:39:33.500320       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 00:39:33.510766       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 00:39:34.723749       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 00:39:34.741438       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 00:39:35.855214       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 00:39:35.878633       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 00:39:36.862591       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 00:39:36.872126       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 00:39:36.879501       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 00:39:36.885786       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 00:39:36.894479       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 00:39:36.903345       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 00:39:58.821736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="10.654138ms"
	I0815 00:39:58.822063       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="130.978µs"
	I0815 00:40:06.031320       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0815 00:40:06.034600       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0815 00:40:06.094222       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0815 00:40:06.094407       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0815 00:40:15.536466       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0815 00:40:17.072449       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-428464"
	
	
	==> kube-proxy [490ba9258921e9ffc906b00640a03eef2aa973f421267572e7cb2a8516fdb3ca] <==
	I0815 00:37:18.807155       1 server_linux.go:66] "Using iptables proxy"
	I0815 00:37:18.920074       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0815 00:37:18.920188       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:37:18.983005       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0815 00:37:18.983070       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:37:18.985379       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:37:18.985879       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:37:18.985895       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:37:18.987581       1 config.go:197] "Starting service config controller"
	I0815 00:37:18.987602       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:37:18.987627       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:37:18.987631       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:37:18.996520       1 config.go:326] "Starting node config controller"
	I0815 00:37:18.996542       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:37:19.088167       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:37:19.088219       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:37:19.096641       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [64d68d8225b93c357c1c0cf46917bbfff185b3ef2e5fb92560b97a2031353b41] <==
	W0815 00:37:10.152188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:37:10.152426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:37:10.153726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 00:37:10.153867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:37:10.969987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 00:37:10.970256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:37:10.985052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 00:37:10.985094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:37:11.007700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 00:37:11.007824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:37:11.056635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 00:37:11.056978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:37:11.124055       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 00:37:11.124204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 00:37:11.134105       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:37:11.134232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:37:11.147662       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:37:11.147891       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 00:37:11.227223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 00:37:11.227489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:37:11.294035       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:37:11.294159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:37:11.311670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 00:37:11.311928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 00:37:14.325226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 00:41:14 addons-428464 kubelet[1486]: I0815 00:41:14.792393    1486 scope.go:117] "RemoveContainer" containerID="475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764"
	Aug 15 00:41:14 addons-428464 kubelet[1486]: E0815 00:41:14.793033    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qnvvp_gadget(369506d7-046c-468c-be58-27dc36f7ae0f)\"" pod="gadget/gadget-qnvvp" podUID="369506d7-046c-468c-be58-27dc36f7ae0f"
	Aug 15 00:41:22 addons-428464 kubelet[1486]: I0815 00:41:22.790726    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-sc6vh" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 00:41:28 addons-428464 kubelet[1486]: I0815 00:41:28.789966    1486 scope.go:117] "RemoveContainer" containerID="475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764"
	Aug 15 00:41:28 addons-428464 kubelet[1486]: E0815 00:41:28.790612    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qnvvp_gadget(369506d7-046c-468c-be58-27dc36f7ae0f)\"" pod="gadget/gadget-qnvvp" podUID="369506d7-046c-468c-be58-27dc36f7ae0f"
	Aug 15 00:41:33 addons-428464 kubelet[1486]: I0815 00:41:33.790391    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jbhnv" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 00:41:42 addons-428464 kubelet[1486]: I0815 00:41:42.790647    1486 scope.go:117] "RemoveContainer" containerID="475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764"
	Aug 15 00:41:42 addons-428464 kubelet[1486]: E0815 00:41:42.791754    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qnvvp_gadget(369506d7-046c-468c-be58-27dc36f7ae0f)\"" pod="gadget/gadget-qnvvp" podUID="369506d7-046c-468c-be58-27dc36f7ae0f"
	Aug 15 00:41:53 addons-428464 kubelet[1486]: I0815 00:41:53.789965    1486 scope.go:117] "RemoveContainer" containerID="475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764"
	Aug 15 00:41:53 addons-428464 kubelet[1486]: E0815 00:41:53.790158    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qnvvp_gadget(369506d7-046c-468c-be58-27dc36f7ae0f)\"" pod="gadget/gadget-qnvvp" podUID="369506d7-046c-468c-be58-27dc36f7ae0f"
	Aug 15 00:41:56 addons-428464 kubelet[1486]: I0815 00:41:56.790052    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-lmclw" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 00:42:07 addons-428464 kubelet[1486]: I0815 00:42:07.789836    1486 scope.go:117] "RemoveContainer" containerID="475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764"
	Aug 15 00:42:07 addons-428464 kubelet[1486]: E0815 00:42:07.790059    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qnvvp_gadget(369506d7-046c-468c-be58-27dc36f7ae0f)\"" pod="gadget/gadget-qnvvp" podUID="369506d7-046c-468c-be58-27dc36f7ae0f"
	Aug 15 00:42:20 addons-428464 kubelet[1486]: I0815 00:42:20.790705    1486 scope.go:117] "RemoveContainer" containerID="475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764"
	Aug 15 00:42:20 addons-428464 kubelet[1486]: E0815 00:42:20.791521    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qnvvp_gadget(369506d7-046c-468c-be58-27dc36f7ae0f)\"" pod="gadget/gadget-qnvvp" podUID="369506d7-046c-468c-be58-27dc36f7ae0f"
	Aug 15 00:42:32 addons-428464 kubelet[1486]: I0815 00:42:32.790516    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-sc6vh" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 00:42:33 addons-428464 kubelet[1486]: I0815 00:42:33.789807    1486 scope.go:117] "RemoveContainer" containerID="475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764"
	Aug 15 00:42:33 addons-428464 kubelet[1486]: E0815 00:42:33.790013    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qnvvp_gadget(369506d7-046c-468c-be58-27dc36f7ae0f)\"" pod="gadget/gadget-qnvvp" podUID="369506d7-046c-468c-be58-27dc36f7ae0f"
	Aug 15 00:42:36 addons-428464 kubelet[1486]: I0815 00:42:36.790181    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jbhnv" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 00:42:45 addons-428464 kubelet[1486]: I0815 00:42:45.789978    1486 scope.go:117] "RemoveContainer" containerID="475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764"
	Aug 15 00:42:45 addons-428464 kubelet[1486]: E0815 00:42:45.790194    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qnvvp_gadget(369506d7-046c-468c-be58-27dc36f7ae0f)\"" pod="gadget/gadget-qnvvp" podUID="369506d7-046c-468c-be58-27dc36f7ae0f"
	Aug 15 00:42:59 addons-428464 kubelet[1486]: I0815 00:42:59.789670    1486 scope.go:117] "RemoveContainer" containerID="475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764"
	Aug 15 00:42:59 addons-428464 kubelet[1486]: E0815 00:42:59.789906    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qnvvp_gadget(369506d7-046c-468c-be58-27dc36f7ae0f)\"" pod="gadget/gadget-qnvvp" podUID="369506d7-046c-468c-be58-27dc36f7ae0f"
	Aug 15 00:43:14 addons-428464 kubelet[1486]: I0815 00:43:14.790571    1486 scope.go:117] "RemoveContainer" containerID="475f6690fb580fe4cbcbfc3b5e6a459b9032a7023107c5eea41cfdc05144f764"
	Aug 15 00:43:14 addons-428464 kubelet[1486]: E0815 00:43:14.791290    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qnvvp_gadget(369506d7-046c-468c-be58-27dc36f7ae0f)\"" pod="gadget/gadget-qnvvp" podUID="369506d7-046c-468c-be58-27dc36f7ae0f"
	
	
	==> storage-provisioner [9bd62c5cc823ac559e537d4c4d7d67319f7bf38c89f3c13d649cc620d0d8e0eb] <==
	I0815 00:37:23.466855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 00:37:23.490872       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 00:37:23.490940       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 00:37:23.501200       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 00:37:23.503928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"76cc3e6c-8b41-4b12-ae1e-e49fdb82075a", APIVersion:"v1", ResourceVersion:"529", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-428464_922c6fb0-0291-446a-b682-d90a59711001 became leader
	I0815 00:37:23.504128       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-428464_922c6fb0-0291-446a-b682-d90a59711001!
	I0815 00:37:23.604756       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-428464_922c6fb0-0291-446a-b682-d90a59711001!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-428464 -n addons-428464
helpers_test.go:261: (dbg) Run:  kubectl --context addons-428464 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-c8zwh ingress-nginx-admission-patch-29h5c test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-428464 describe pod ingress-nginx-admission-create-c8zwh ingress-nginx-admission-patch-29h5c test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-428464 describe pod ingress-nginx-admission-create-c8zwh ingress-nginx-admission-patch-29h5c test-job-nginx-0: exit status 1 (85.46397ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-c8zwh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-29h5c" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-428464 describe pod ingress-nginx-admission-create-c8zwh ingress-nginx-admission-patch-29h5c test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.81s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (381.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-145466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0815 01:27:05.963939  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-145466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m18.345860387s)

-- stdout --
	* [old-k8s-version-145466] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-145466" primary control-plane node in "old-k8s-version-145466" cluster
	* Pulling base image v0.0.44-1723650208-19443 ...
	* Restarting existing docker container for "old-k8s-version-145466" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-145466 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0815 01:25:59.573951  798414 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:25:59.574084  798414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:25:59.574095  798414 out.go:304] Setting ErrFile to fd 2...
	I0815 01:25:59.574100  798414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:25:59.574334  798414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	I0815 01:25:59.574709  798414 out.go:298] Setting JSON to false
	I0815 01:25:59.575653  798414 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18482,"bootTime":1723666678,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 01:25:59.575726  798414 start.go:139] virtualization:  
	I0815 01:25:59.578147  798414 out.go:177] * [old-k8s-version-145466] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 01:25:59.580624  798414 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:25:59.580809  798414 notify.go:220] Checking for updates...
	I0815 01:25:59.584160  798414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:25:59.585694  798414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 01:25:59.587773  798414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	I0815 01:25:59.589455  798414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 01:25:59.591443  798414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:25:59.593889  798414 config.go:182] Loaded profile config "old-k8s-version-145466": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0815 01:25:59.596194  798414 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 01:25:59.597841  798414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:25:59.621518  798414 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 01:25:59.621657  798414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 01:25:59.690023  798414 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-15 01:25:59.67772726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 01:25:59.690140  798414 docker.go:307] overlay module found
	I0815 01:25:59.692335  798414 out.go:177] * Using the docker driver based on existing profile
	I0815 01:25:59.694336  798414 start.go:297] selected driver: docker
	I0815 01:25:59.694353  798414 start.go:901] validating driver "docker" against &{Name:old-k8s-version-145466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145466 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:25:59.694546  798414 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:25:59.695179  798414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 01:25:59.751775  798414 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-15 01:25:59.742078871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 01:25:59.752151  798414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:25:59.752188  798414 cni.go:84] Creating CNI manager for ""
	I0815 01:25:59.752197  798414 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 01:25:59.752246  798414 start.go:340] cluster config:
	{Name:old-k8s-version-145466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:25:59.754616  798414 out.go:177] * Starting "old-k8s-version-145466" primary control-plane node in "old-k8s-version-145466" cluster
	I0815 01:25:59.756482  798414 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0815 01:25:59.758412  798414 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 01:25:59.760008  798414 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0815 01:25:59.760073  798414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-587265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0815 01:25:59.760096  798414 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 01:25:59.760101  798414 cache.go:56] Caching tarball of preloaded images
	I0815 01:25:59.760190  798414 preload.go:172] Found /home/jenkins/minikube-integration/19443-587265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 01:25:59.760201  798414 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0815 01:25:59.760318  798414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/config.json ...
	W0815 01:25:59.787246  798414 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 is of wrong architecture
	I0815 01:25:59.787268  798414 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 01:25:59.787353  798414 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 01:25:59.787378  798414 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 01:25:59.787383  798414 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 01:25:59.787397  798414 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 01:25:59.787409  798414 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 01:25:59.922097  798414 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 01:25:59.922134  798414 cache.go:194] Successfully downloaded all kic artifacts
	I0815 01:25:59.922173  798414 start.go:360] acquireMachinesLock for old-k8s-version-145466: {Name:mk4e9cba23400dab2e3a7a2919cb59bfc861e072 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:25:59.922243  798414 start.go:364] duration metric: took 45.063µs to acquireMachinesLock for "old-k8s-version-145466"
	I0815 01:25:59.922271  798414 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:25:59.922277  798414 fix.go:54] fixHost starting: 
	I0815 01:25:59.922587  798414 cli_runner.go:164] Run: docker container inspect old-k8s-version-145466 --format={{.State.Status}}
	I0815 01:25:59.938278  798414 fix.go:112] recreateIfNeeded on old-k8s-version-145466: state=Stopped err=<nil>
	W0815 01:25:59.938306  798414 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:25:59.940644  798414 out.go:177] * Restarting existing docker container for "old-k8s-version-145466" ...
	I0815 01:25:59.942493  798414 cli_runner.go:164] Run: docker start old-k8s-version-145466
	I0815 01:26:00.522118  798414 cli_runner.go:164] Run: docker container inspect old-k8s-version-145466 --format={{.State.Status}}
	I0815 01:26:00.552318  798414 kic.go:430] container "old-k8s-version-145466" state is running.
	I0815 01:26:00.552782  798414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145466
	I0815 01:26:00.585700  798414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/config.json ...
	I0815 01:26:00.586155  798414 machine.go:94] provisionDockerMachine start ...
	I0815 01:26:00.586229  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:00.606533  798414 main.go:141] libmachine: Using SSH client type: native
	I0815 01:26:00.606834  798414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33805 <nil> <nil>}
	I0815 01:26:00.606846  798414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:26:00.608488  798414 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0815 01:26:03.747543  798414 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-145466
	
	I0815 01:26:03.747568  798414 ubuntu.go:169] provisioning hostname "old-k8s-version-145466"
	I0815 01:26:03.747648  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:03.766016  798414 main.go:141] libmachine: Using SSH client type: native
	I0815 01:26:03.766303  798414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33805 <nil> <nil>}
	I0815 01:26:03.766319  798414 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-145466 && echo "old-k8s-version-145466" | sudo tee /etc/hostname
	I0815 01:26:03.915942  798414 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-145466
	
	I0815 01:26:03.916027  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:03.936903  798414 main.go:141] libmachine: Using SSH client type: native
	I0815 01:26:03.937222  798414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33805 <nil> <nil>}
	I0815 01:26:03.937246  798414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-145466' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-145466/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-145466' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:26:04.076172  798414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:26:04.076201  798414 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19443-587265/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-587265/.minikube}
	I0815 01:26:04.076241  798414 ubuntu.go:177] setting up certificates
	I0815 01:26:04.076252  798414 provision.go:84] configureAuth start
	I0815 01:26:04.076323  798414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145466
	I0815 01:26:04.095332  798414 provision.go:143] copyHostCerts
	I0815 01:26:04.095406  798414 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-587265/.minikube/key.pem, removing ...
	I0815 01:26:04.095423  798414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-587265/.minikube/key.pem
	I0815 01:26:04.095515  798414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-587265/.minikube/key.pem (1675 bytes)
	I0815 01:26:04.095619  798414 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-587265/.minikube/ca.pem, removing ...
	I0815 01:26:04.095629  798414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-587265/.minikube/ca.pem
	I0815 01:26:04.095659  798414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-587265/.minikube/ca.pem (1082 bytes)
	I0815 01:26:04.095723  798414 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-587265/.minikube/cert.pem, removing ...
	I0815 01:26:04.095733  798414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-587265/.minikube/cert.pem
	I0815 01:26:04.095760  798414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-587265/.minikube/cert.pem (1123 bytes)
	I0815 01:26:04.095817  798414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-587265/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-145466 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-145466]
	I0815 01:26:04.816454  798414 provision.go:177] copyRemoteCerts
	I0815 01:26:04.816566  798414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:26:04.816627  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:04.865610  798414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/old-k8s-version-145466/id_rsa Username:docker}
	I0815 01:26:04.961731  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 01:26:04.986962  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 01:26:05.022151  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:26:05.048703  798414 provision.go:87] duration metric: took 972.433008ms to configureAuth
	I0815 01:26:05.048731  798414 ubuntu.go:193] setting minikube options for container-runtime
	I0815 01:26:05.048934  798414 config.go:182] Loaded profile config "old-k8s-version-145466": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0815 01:26:05.048950  798414 machine.go:97] duration metric: took 4.462769778s to provisionDockerMachine
	I0815 01:26:05.048958  798414 start.go:293] postStartSetup for "old-k8s-version-145466" (driver="docker")
	I0815 01:26:05.048968  798414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:26:05.049021  798414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:26:05.049063  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:05.072013  798414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/old-k8s-version-145466/id_rsa Username:docker}
	I0815 01:26:05.170225  798414 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:26:05.173544  798414 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 01:26:05.173583  798414 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 01:26:05.173593  798414 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 01:26:05.173601  798414 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 01:26:05.173612  798414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-587265/.minikube/addons for local assets ...
	I0815 01:26:05.173671  798414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-587265/.minikube/files for local assets ...
	I0815 01:26:05.173762  798414 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-587265/.minikube/files/etc/ssl/certs/5926602.pem -> 5926602.pem in /etc/ssl/certs
	I0815 01:26:05.173877  798414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:26:05.182899  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/files/etc/ssl/certs/5926602.pem --> /etc/ssl/certs/5926602.pem (1708 bytes)
	I0815 01:26:05.209284  798414 start.go:296] duration metric: took 160.311879ms for postStartSetup
	I0815 01:26:05.209368  798414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 01:26:05.209415  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:05.226804  798414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/old-k8s-version-145466/id_rsa Username:docker}
	I0815 01:26:05.320876  798414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 01:26:05.325268  798414 fix.go:56] duration metric: took 5.40298248s for fixHost
	I0815 01:26:05.325293  798414 start.go:83] releasing machines lock for "old-k8s-version-145466", held for 5.403033705s
	I0815 01:26:05.325362  798414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145466
	I0815 01:26:05.341534  798414 ssh_runner.go:195] Run: cat /version.json
	I0815 01:26:05.341595  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:05.341594  798414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:26:05.341750  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:05.358372  798414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/old-k8s-version-145466/id_rsa Username:docker}
	I0815 01:26:05.373543  798414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/old-k8s-version-145466/id_rsa Username:docker}
	I0815 01:26:05.585439  798414 ssh_runner.go:195] Run: systemctl --version
	I0815 01:26:05.589817  798414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 01:26:05.594226  798414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0815 01:26:05.611706  798414 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0815 01:26:05.611784  798414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:26:05.620827  798414 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 01:26:05.620867  798414 start.go:495] detecting cgroup driver to use...
	I0815 01:26:05.620904  798414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 01:26:05.620970  798414 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 01:26:05.634481  798414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 01:26:05.647095  798414 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:26:05.647166  798414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:26:05.660789  798414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:26:05.672598  798414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:26:05.761732  798414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:26:05.846989  798414 docker.go:233] disabling docker service ...
	I0815 01:26:05.847057  798414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:26:05.860790  798414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:26:05.872605  798414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:26:05.968597  798414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:26:06.077959  798414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:26:06.090706  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:26:06.108435  798414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0815 01:26:06.119881  798414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 01:26:06.132368  798414 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 01:26:06.132441  798414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 01:26:06.142917  798414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 01:26:06.153383  798414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 01:26:06.163568  798414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 01:26:06.173539  798414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:26:06.184905  798414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 01:26:06.194797  798414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:26:06.203755  798414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:26:06.212910  798414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:26:06.328154  798414 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 01:26:06.757971  798414 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0815 01:26:06.758050  798414 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0815 01:26:06.764487  798414 start.go:563] Will wait 60s for crictl version
	I0815 01:26:06.764550  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:26:06.768112  798414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:26:06.826837  798414 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0815 01:26:06.826906  798414 ssh_runner.go:195] Run: containerd --version
	I0815 01:26:06.853824  798414 ssh_runner.go:195] Run: containerd --version
	I0815 01:26:06.890136  798414 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	I0815 01:26:06.892071  798414 cli_runner.go:164] Run: docker network inspect old-k8s-version-145466 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 01:26:06.920269  798414 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0815 01:26:06.924190  798414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:26:06.938830  798414 kubeadm.go:883] updating cluster {Name:old-k8s-version-145466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145466 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:26:06.938954  798414 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0815 01:26:06.939012  798414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:26:06.986729  798414 containerd.go:627] all images are preloaded for containerd runtime.
	I0815 01:26:06.986752  798414 containerd.go:534] Images already preloaded, skipping extraction
	I0815 01:26:06.986815  798414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:26:07.083046  798414 containerd.go:627] all images are preloaded for containerd runtime.
	I0815 01:26:07.083072  798414 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:26:07.083080  798414 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0815 01:26:07.083202  798414 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-145466 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:26:07.083277  798414 ssh_runner.go:195] Run: sudo crictl info
	I0815 01:26:07.166482  798414 cni.go:84] Creating CNI manager for ""
	I0815 01:26:07.166511  798414 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 01:26:07.166522  798414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:26:07.166574  798414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-145466 NodeName:old-k8s-version-145466 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 01:26:07.166755  798414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-145466"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:26:07.166840  798414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 01:26:07.177168  798414 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:26:07.177242  798414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:26:07.186863  798414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0815 01:26:07.206656  798414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:26:07.226261  798414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
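The 2125-byte kubeadm.yaml.new copied above is the multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) printed earlier in this log. A small sanity-check sketch that parses such a file document by document; it assumes the gopkg.in/yaml.v3 module, which is not part of this log:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Reads a multi-document kubeadm YAML on stdin and prints each document's
    // apiVersion and kind, e.g. `go run check.go < kubeadm.yaml.new`.
    func main() {
        dec := yaml.NewDecoder(os.Stdin)
        for {
            var doc map[string]interface{}
            err := dec.Decode(&doc)
            if errors.Is(err, io.EOF) {
                break
            }
            if err != nil {
                fmt.Fprintln(os.Stderr, "parse error:", err)
                os.Exit(1)
            }
            fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
        }
    }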
	I0815 01:26:07.246084  798414 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0815 01:26:07.249707  798414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
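The bash pipeline above is an idempotent /etc/hosts update: it drops any stale line ending in a tab plus control-plane.minikube.internal and appends the current "IP<tab>host" entry. A rough, purely illustrative Go equivalent:

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry removes any existing entry for host and appends "ip<TAB>host".
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the stale control-plane entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }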
	I0815 01:26:07.261141  798414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:26:07.376798  798414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:26:07.400340  798414 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466 for IP: 192.168.85.2
	I0815 01:26:07.400360  798414 certs.go:194] generating shared ca certs ...
	I0815 01:26:07.400382  798414 certs.go:226] acquiring lock for ca certs: {Name:mkd44da6bd4b219dfe871c9c58d5756252de3a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:26:07.400542  798414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-587265/.minikube/ca.key
	I0815 01:26:07.400613  798414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.key
	I0815 01:26:07.400627  798414 certs.go:256] generating profile certs ...
	I0815 01:26:07.400741  798414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.key
	I0815 01:26:07.400828  798414 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/apiserver.key.ae927303
	I0815 01:26:07.400905  798414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/proxy-client.key
	I0815 01:26:07.401049  798414 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/592660.pem (1338 bytes)
	W0815 01:26:07.401101  798414 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-587265/.minikube/certs/592660_empty.pem, impossibly tiny 0 bytes
	I0815 01:26:07.401115  798414 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 01:26:07.401158  798414 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem (1082 bytes)
	I0815 01:26:07.401205  798414 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:26:07.401236  798414 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/key.pem (1675 bytes)
	I0815 01:26:07.401297  798414 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/files/etc/ssl/certs/5926602.pem (1708 bytes)
	I0815 01:26:07.402037  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:26:07.490895  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 01:26:07.561737  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:26:07.629284  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:26:07.695983  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 01:26:07.732866  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 01:26:07.770035  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:26:07.807374  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:26:07.844306  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/files/etc/ssl/certs/5926602.pem --> /usr/share/ca-certificates/5926602.pem (1708 bytes)
	I0815 01:26:07.880814  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:26:07.918363  798414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/certs/592660.pem --> /usr/share/ca-certificates/592660.pem (1338 bytes)
	I0815 01:26:07.973325  798414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:26:08.021984  798414 ssh_runner.go:195] Run: openssl version
	I0815 01:26:08.031704  798414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5926602.pem && ln -fs /usr/share/ca-certificates/5926602.pem /etc/ssl/certs/5926602.pem"
	I0815 01:26:08.051374  798414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5926602.pem
	I0815 01:26:08.058833  798414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:47 /usr/share/ca-certificates/5926602.pem
	I0815 01:26:08.058939  798414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5926602.pem
	I0815 01:26:08.074680  798414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5926602.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:26:08.087311  798414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:26:08.099152  798414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:26:08.103350  798414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:36 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:26:08.103416  798414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:26:08.110812  798414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:26:08.126568  798414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/592660.pem && ln -fs /usr/share/ca-certificates/592660.pem /etc/ssl/certs/592660.pem"
	I0815 01:26:08.137368  798414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/592660.pem
	I0815 01:26:08.141357  798414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:47 /usr/share/ca-certificates/592660.pem
	I0815 01:26:08.141496  798414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/592660.pem
	I0815 01:26:08.149087  798414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/592660.pem /etc/ssl/certs/51391683.0"
	I0815 01:26:08.158887  798414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:26:08.162750  798414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:26:08.169988  798414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:26:08.177414  798414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:26:08.184743  798414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:26:08.192504  798414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:26:08.199918  798414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
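Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now. The same check with Go's crypto/x509, as a sketch rather than minikube's actual certs code:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }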
	I0815 01:26:08.207300  798414 kubeadm.go:392] StartCluster: {Name:old-k8s-version-145466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:08.207397  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0815 01:26:08.207453  798414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:26:08.260835  798414 cri.go:89] found id: "56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4"
	I0815 01:26:08.260862  798414 cri.go:89] found id: "8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1"
	I0815 01:26:08.260867  798414 cri.go:89] found id: "76c1204f9e705ed1d3d61aa06410f99c08c09543c18fb107bec9e21b0ffe3046"
	I0815 01:26:08.260870  798414 cri.go:89] found id: "8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5"
	I0815 01:26:08.260873  798414 cri.go:89] found id: "d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f"
	I0815 01:26:08.260877  798414 cri.go:89] found id: "f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d"
	I0815 01:26:08.260881  798414 cri.go:89] found id: "149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d"
	I0815 01:26:08.260884  798414 cri.go:89] found id: "d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f"
	I0815 01:26:08.260887  798414 cri.go:89] found id: ""
	I0815 01:26:08.260940  798414 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0815 01:26:08.274037  798414 cri.go:116] JSON = null
	W0815 01:26:08.274103  798414 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
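The warning above comes from cross-checking `crictl ps` (8 containers found) against `sudo runc --root /run/containerd/runc/k8s.io list -f json`, which prints `null` when it is tracking no paused containers; decoding that `null` yields an empty list, hence "list returned 0 containers, but ps returned 8". A minimal decode sketch, leaving the entries untyped since runc's exact JSON schema is not shown in this log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "runc", "--root", "/run/containerd/runc/k8s.io",
            "list", "-f", "json").Output()
        if err != nil {
            panic(err)
        }
        var containers []map[string]interface{} // json.Unmarshal leaves this nil for the literal `null`
        if err := json.Unmarshal(out, &containers); err != nil {
            panic(err)
        }
        fmt.Printf("runc reports %d container(s)\n", len(containers))
    }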
	I0815 01:26:08.274162  798414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:26:08.284122  798414 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:26:08.284139  798414 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:26:08.284190  798414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:26:08.293156  798414 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:26:08.293581  798414 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-145466" does not appear in /home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 01:26:08.293673  798414 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-587265/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-145466" cluster setting kubeconfig missing "old-k8s-version-145466" context setting]
	I0815 01:26:08.293931  798414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/kubeconfig: {Name:mka65351b6674d2edd84b4cf38d527ec03739af8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:26:08.295085  798414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:26:08.304747  798414 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0815 01:26:08.304777  798414 kubeadm.go:597] duration metric: took 20.632645ms to restartPrimaryControlPlane
	I0815 01:26:08.304786  798414 kubeadm.go:394] duration metric: took 97.501283ms to StartCluster
	I0815 01:26:08.304801  798414 settings.go:142] acquiring lock: {Name:mkf353d296e2684cbdd29a016c10a0eb45e9f213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:26:08.304853  798414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 01:26:08.305495  798414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/kubeconfig: {Name:mka65351b6674d2edd84b4cf38d527ec03739af8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:26:08.305682  798414 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0815 01:26:08.306095  798414 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:26:08.306170  798414 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-145466"
	I0815 01:26:08.306193  798414 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-145466"
	W0815 01:26:08.306199  798414 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:26:08.306221  798414 host.go:66] Checking if "old-k8s-version-145466" exists ...
	I0815 01:26:08.306682  798414 cli_runner.go:164] Run: docker container inspect old-k8s-version-145466 --format={{.State.Status}}
	I0815 01:26:08.307032  798414 config.go:182] Loaded profile config "old-k8s-version-145466": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0815 01:26:08.307096  798414 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-145466"
	I0815 01:26:08.307141  798414 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-145466"
	I0815 01:26:08.307425  798414 cli_runner.go:164] Run: docker container inspect old-k8s-version-145466 --format={{.State.Status}}
	I0815 01:26:08.307746  798414 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-145466"
	I0815 01:26:08.307771  798414 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-145466"
	W0815 01:26:08.307778  798414 addons.go:243] addon metrics-server should already be in state true
	I0815 01:26:08.307799  798414 host.go:66] Checking if "old-k8s-version-145466" exists ...
	I0815 01:26:08.308205  798414 cli_runner.go:164] Run: docker container inspect old-k8s-version-145466 --format={{.State.Status}}
	I0815 01:26:08.312167  798414 addons.go:69] Setting dashboard=true in profile "old-k8s-version-145466"
	I0815 01:26:08.312208  798414 addons.go:234] Setting addon dashboard=true in "old-k8s-version-145466"
	W0815 01:26:08.312215  798414 addons.go:243] addon dashboard should already be in state true
	I0815 01:26:08.312262  798414 host.go:66] Checking if "old-k8s-version-145466" exists ...
	I0815 01:26:08.312700  798414 cli_runner.go:164] Run: docker container inspect old-k8s-version-145466 --format={{.State.Status}}
	I0815 01:26:08.313136  798414 out.go:177] * Verifying Kubernetes components...
	I0815 01:26:08.314916  798414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:26:08.383448  798414 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:26:08.383453  798414 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:26:08.386410  798414 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:26:08.386445  798414 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:26:08.386523  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:08.386647  798414 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:26:08.386661  798414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:26:08.386708  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:08.387000  798414 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0815 01:26:08.388872  798414 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0815 01:26:08.390315  798414 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0815 01:26:08.390333  798414 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0815 01:26:08.390396  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:08.424897  798414 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-145466"
	W0815 01:26:08.424919  798414 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:26:08.424944  798414 host.go:66] Checking if "old-k8s-version-145466" exists ...
	I0815 01:26:08.425377  798414 cli_runner.go:164] Run: docker container inspect old-k8s-version-145466 --format={{.State.Status}}
	I0815 01:26:08.459227  798414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:26:08.469264  798414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/old-k8s-version-145466/id_rsa Username:docker}
	I0815 01:26:08.476319  798414 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-145466" to be "Ready" ...
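The 6m0s wait above polls the node object until its Ready condition is True. A rough client-go sketch of that loop; the kubeconfig path is the one from the log, and the code is illustrative rather than minikube's node_ready implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the node's Ready condition is True or timeout elapses.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // transient errors (like the connection refused below) are simply retried
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19443-587265/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(cs, "old-k8s-version-145466", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }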
	I0815 01:26:08.481878  798414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/old-k8s-version-145466/id_rsa Username:docker}
	I0815 01:26:08.495313  798414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/old-k8s-version-145466/id_rsa Username:docker}
	I0815 01:26:08.496789  798414 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:26:08.496808  798414 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:26:08.496869  798414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145466
	I0815 01:26:08.565241  798414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33805 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/old-k8s-version-145466/id_rsa Username:docker}
	I0815 01:26:08.643350  798414 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0815 01:26:08.643447  798414 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0815 01:26:08.681989  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:26:08.706315  798414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:26:08.706401  798414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:26:08.754136  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:26:08.768309  798414 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0815 01:26:08.768333  798414 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0815 01:26:08.820894  798414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:26:08.820920  798414 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:26:08.845892  798414 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0815 01:26:08.845992  798414 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0815 01:26:08.918811  798414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:26:08.918895  798414 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:26:08.925269  798414 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0815 01:26:08.925338  798414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0815 01:26:09.014629  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:26:09.024017  798414 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0815 01:26:09.024051  798414 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0815 01:26:09.091927  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.091965  798414 retry.go:31] will retry after 129.132835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 01:26:09.100042  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.100072  798414 retry.go:31] will retry after 312.027235ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
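	Each `apply failed, will retry` / `will retry after ...` pair above is one turn of a backoff loop: the kubectl apply is re-run with a jittered, growing delay until the API server on localhost:8443 starts answering. A hypothetical sketch of that loop (not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry re-runs apply until it succeeds, sleeping a jittered, roughly
    // doubling delay between attempts, like the "will retry after ..." lines above.
    func retry(apply func() error, attempts int) error {
        delay := 100 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("apply failed, will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(func() error {
            calls++
            if calls < 4 { // stand-in for "connection to the server localhost:8443 was refused"
                return errors.New("connection refused")
            }
            return nil
        }, 10)
        fmt.Println("result:", err)
    }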
	I0815 01:26:09.110841  798414 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0815 01:26:09.110865  798414 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0815 01:26:09.153114  798414 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0815 01:26:09.153138  798414 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0815 01:26:09.206342  798414 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0815 01:26:09.206362  798414 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0815 01:26:09.221594  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0815 01:26:09.251106  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.251196  798414 retry.go:31] will retry after 316.401978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.251701  798414 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 01:26:09.251718  798414 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0815 01:26:09.318942  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0815 01:26:09.370585  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.370622  798414 retry.go:31] will retry after 431.069804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.412926  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0815 01:26:09.457810  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.457843  798414 retry.go:31] will retry after 308.596157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 01:26:09.528982  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.529026  798414 retry.go:31] will retry after 344.892212ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.568485  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0815 01:26:09.675741  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.675777  798414 retry.go:31] will retry after 520.552668ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.767028  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 01:26:09.802796  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:26:09.874117  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0815 01:26:09.897072  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:09.897108  798414 retry.go:31] will retry after 230.591454ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 01:26:10.022743  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:10.022783  798414 retry.go:31] will retry after 816.130672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 01:26:10.073281  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:10.073317  798414 retry.go:31] will retry after 533.635614ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:10.128687  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 01:26:10.197190  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0815 01:26:10.234725  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:10.234761  798414 retry.go:31] will retry after 320.710466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 01:26:10.328544  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:10.328581  798414 retry.go:31] will retry after 834.039661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:10.477225  798414 node_ready.go:53] error getting node "old-k8s-version-145466": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-145466": dial tcp 192.168.85.2:8443: connect: connection refused
	I0815 01:26:10.556633  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 01:26:10.607167  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0815 01:26:10.707780  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:10.707879  798414 retry.go:31] will retry after 687.49878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 01:26:10.773917  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:10.773999  798414 retry.go:31] will retry after 968.333544ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:10.839108  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0815 01:26:10.940978  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:10.941069  798414 retry.go:31] will retry after 528.236798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:11.162840  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0815 01:26:11.286411  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:11.286443  798414 retry.go:31] will retry after 658.949406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:11.395918  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 01:26:11.470312  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0815 01:26:11.503442  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:11.503473  798414 retry.go:31] will retry after 925.477913ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 01:26:11.585707  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:11.585737  798414 retry.go:31] will retry after 856.324307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:11.743233  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0815 01:26:11.852010  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:11.852048  798414 retry.go:31] will retry after 1.661820104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:11.946312  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0815 01:26:12.058959  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:12.058992  798414 retry.go:31] will retry after 1.138577647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:12.429690  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 01:26:12.443030  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:26:12.477590  798414 node_ready.go:53] error getting node "old-k8s-version-145466": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-145466": dial tcp 192.168.85.2:8443: connect: connection refused
	W0815 01:26:12.573657  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:12.573692  798414 retry.go:31] will retry after 2.333911479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 01:26:12.632843  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:12.632886  798414 retry.go:31] will retry after 1.631093453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:13.198272  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0815 01:26:13.312854  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:13.312884  798414 retry.go:31] will retry after 1.512618801s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:13.514843  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0815 01:26:13.609863  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:13.609891  798414 retry.go:31] will retry after 2.074716504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:14.264244  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0815 01:26:14.353886  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:14.353922  798414 retry.go:31] will retry after 2.26243475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:14.477643  798414 node_ready.go:53] error getting node "old-k8s-version-145466": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-145466": dial tcp 192.168.85.2:8443: connect: connection refused
	I0815 01:26:14.826520  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:26:14.908050  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0815 01:26:14.921327  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:14.921357  798414 retry.go:31] will retry after 4.245821324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 01:26:15.013007  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:15.013047  798414 retry.go:31] will retry after 1.791379232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:15.685026  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0815 01:26:15.799855  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:15.799895  798414 retry.go:31] will retry after 3.485169679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:16.477700  798414 node_ready.go:53] error getting node "old-k8s-version-145466": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-145466": dial tcp 192.168.85.2:8443: connect: connection refused
	I0815 01:26:16.617055  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:26:16.805661  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0815 01:26:16.821914  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:16.821949  798414 retry.go:31] will retry after 3.027880504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 01:26:16.937177  798414 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:16.937214  798414 retry.go:31] will retry after 5.349298615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 01:26:19.168288  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:26:19.285652  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:26:19.850891  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:26:22.287074  798414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 01:26:28.217889  798414 node_ready.go:49] node "old-k8s-version-145466" has status "Ready":"True"
	I0815 01:26:28.217914  798414 node_ready.go:38] duration metric: took 19.741560308s for node "old-k8s-version-145466" to be "Ready" ...
	I0815 01:26:28.217924  798414 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:26:28.672034  798414 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-sc7dc" in "kube-system" namespace to be "Ready" ...
	I0815 01:26:29.200249  798414 pod_ready.go:92] pod "coredns-74ff55c5b-sc7dc" in "kube-system" namespace has status "Ready":"True"
	I0815 01:26:29.200326  798414 pod_ready.go:81] duration metric: took 528.215279ms for pod "coredns-74ff55c5b-sc7dc" in "kube-system" namespace to be "Ready" ...
	I0815 01:26:29.200354  798414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-145466" in "kube-system" namespace to be "Ready" ...
	I0815 01:26:29.309603  798414 pod_ready.go:92] pod "etcd-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"True"
	I0815 01:26:29.309677  798414 pod_ready.go:81] duration metric: took 109.299774ms for pod "etcd-old-k8s-version-145466" in "kube-system" namespace to be "Ready" ...
	I0815 01:26:29.309707  798414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-145466" in "kube-system" namespace to be "Ready" ...
	I0815 01:26:31.339217  798414 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:26:32.058244  798414 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (12.772541642s)
	I0815 01:26:32.058489  798414 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.890159568s)
	I0815 01:26:32.058511  798414 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-145466"
	I0815 01:26:32.058680  798414 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.207758269s)
	I0815 01:26:32.058764  798414 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.771660472s)
	I0815 01:26:32.061346  798414 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-145466 addons enable metrics-server
	
	I0815 01:26:32.069407  798414 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0815 01:26:32.071322  798414 addons.go:510] duration metric: took 23.765221635s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
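
The addon-enable sequence above (repeated "apply failed, will retry" followed by "will retry after N s") is an apply-and-retry loop: each manifest is applied with kubectl and, while the apiserver on localhost:8443 is still refusing connections, the apply is retried after a jittered delay until it succeeds. The following is a minimal sketch of that pattern, not minikube's actual code; the kubectl path, kubeconfig path, manifest, deadline, and backoff bounds are illustrative assumptions taken from the log lines.

// apply_retry_sketch.go - illustrative only, assumes kubectl and the manifest exist locally.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs `sudo KUBECONFIG=<kubeconfig> <kubectl> apply --force -f <manifest>`
// and retries on failure with a jittered delay, mirroring the "apply failed, will retry"
// / "will retry after ..." pairs emitted by addons.go and retry.go above.
func applyWithRetry(kubectl, kubeconfig, manifest string, deadline time.Time) error {
	for {
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl, "apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
		}
		// Jittered backoff of roughly 1-5s, matching the retry intervals in the log.
		delay := time.Second + time.Duration(rand.Int63n(int64(4*time.Second)))
		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	_ = applyWithRetry(
		"/var/lib/minikube/binaries/v1.20.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		deadline,
	)
}
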
	I0815 01:26:33.816154  798414 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:26:35.816278  798414 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:26:37.817001  798414 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:26:39.316924  798414 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"True"
	I0815 01:26:39.317000  798414 pod_ready.go:81] duration metric: took 10.007268856s for pod "kube-apiserver-old-k8s-version-145466" in "kube-system" namespace to be "Ready" ...
	I0815 01:26:39.317028  798414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace to be "Ready" ...
	I0815 01:26:41.325804  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:26:43.823966  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:26:46.323804  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:26:48.823753  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:26:51.329608  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:26:53.824163  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:26:55.825107  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:26:58.325731  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:00.337533  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:02.828916  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:05.323897  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:07.325034  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:09.824268  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:12.325043  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:14.824140  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:16.824555  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:19.336792  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:21.822762  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:23.823506  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:25.824875  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:28.323355  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:30.326045  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:32.326505  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:34.824704  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:37.324127  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:39.325365  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:41.839092  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:44.325718  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:46.824159  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:48.323803  798414 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:48.323828  798414 pod_ready.go:81] duration metric: took 1m9.006777258s for pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:48.323840  798414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hdj25" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:48.328838  798414 pod_ready.go:92] pod "kube-proxy-hdj25" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:48.328866  798414 pod_ready.go:81] duration metric: took 4.98266ms for pod "kube-proxy-hdj25" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:48.328878  798414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-145466" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:50.337069  798414 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:52.836311  798414 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:54.334906  798414 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:54.334933  798414 pod_ready.go:81] duration metric: took 6.006014766s for pod "kube-scheduler-old-k8s-version-145466" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:54.334945  798414 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:56.341453  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:58.841137  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:01.341468  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:03.342277  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:05.841177  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:07.841420  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:09.841515  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:11.841903  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:14.341251  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:16.341817  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:18.905274  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:21.340938  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:23.343888  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:25.840886  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:27.841307  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:29.842382  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:31.845262  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:34.340679  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:36.340926  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:38.348235  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:40.841788  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:43.341955  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:45.343754  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:47.841262  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:50.341583  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:52.342360  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:54.842055  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:57.341767  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:59.842110  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:02.341631  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:04.841905  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:07.341412  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:09.361881  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:11.840761  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:13.845821  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:16.341158  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:18.341582  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:20.342140  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:22.842042  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:24.842165  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:27.341302  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:29.844818  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:32.340995  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:34.341661  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:36.841766  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:39.342526  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:41.841158  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:43.842484  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:46.341306  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:48.840994  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.842218  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:53.341747  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:55.841257  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:58.341465  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:00.380117  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:02.840765  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.341086  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:07.341145  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.341333  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.342252  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:13.841939  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:16.345255  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.843266  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:21.341071  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.341252  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:25.341574  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:27.842623  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:30.341285  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.342057  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:34.841107  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:36.841164  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:38.841855  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:41.340950  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:43.341511  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:45.342593  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.840983  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:49.842312  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:52.341179  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:54.841042  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:56.841647  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:59.340948  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:01.840886  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:03.841711  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:06.341111  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:08.845348  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.340908  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:13.340956  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.341002  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:17.342150  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:19.841230  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.841934  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:24.341985  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:26.841205  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.841630  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:31.341383  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:33.841991  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.341927  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:38.840908  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:41.340767  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:43.341898  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.342464  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:47.842026  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:50.341226  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:52.841594  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:54.341770  798414 pod_ready.go:81] duration metric: took 4m0.006810385s for pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace to be "Ready" ...
	E0815 01:31:54.341793  798414 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 01:31:54.341801  798414 pod_ready.go:38] duration metric: took 5m26.123866447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
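
The readiness wait that just ended is a polling loop: list or get the target pod, check its PodReady condition, and keep retrying until either the condition is True or the per-pod deadline expires, which is exactly how metrics-server-9975d5f86-qvcw4 hit "context deadline exceeded" after 4m0s. The sketch below shows that pattern with client-go; it is not minikube's pod_ready.go, and the kubeconfig path, namespace, pod name, timeout, and poll interval are illustrative assumptions.

// pod_ready_sketch.go - illustrative only; requires k8s.io/client-go in go.mod.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the pod every 2s until it is Ready or ctx expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			// Surfaces as "waitPodCondition: context deadline exceeded" in the log above.
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "metrics-server-9975d5f86-qvcw4"); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}
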
	I0815 01:31:54.341815  798414 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:31:54.341844  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:54.341906  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:54.389104  798414 cri.go:89] found id: "5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645"
	I0815 01:31:54.389129  798414 cri.go:89] found id: "d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f"
	I0815 01:31:54.389135  798414 cri.go:89] found id: ""
	I0815 01:31:54.389142  798414 logs.go:276] 2 containers: [5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645 d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f]
	I0815 01:31:54.389202  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.392818  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.396470  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0815 01:31:54.396573  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:54.440719  798414 cri.go:89] found id: "1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43"
	I0815 01:31:54.440738  798414 cri.go:89] found id: "149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d"
	I0815 01:31:54.440744  798414 cri.go:89] found id: ""
	I0815 01:31:54.440751  798414 logs.go:276] 2 containers: [1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43 149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d]
	I0815 01:31:54.440852  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.444415  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.447805  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0815 01:31:54.447916  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:54.487638  798414 cri.go:89] found id: "08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9"
	I0815 01:31:54.487661  798414 cri.go:89] found id: "56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4"
	I0815 01:31:54.487666  798414 cri.go:89] found id: ""
	I0815 01:31:54.487673  798414 logs.go:276] 2 containers: [08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9 56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4]
	I0815 01:31:54.487731  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.491365  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.494774  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:54.494846  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:54.556835  798414 cri.go:89] found id: "ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8"
	I0815 01:31:54.556860  798414 cri.go:89] found id: "f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d"
	I0815 01:31:54.556865  798414 cri.go:89] found id: ""
	I0815 01:31:54.556873  798414 logs.go:276] 2 containers: [ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8 f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d]
	I0815 01:31:54.556935  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.560493  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.563824  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:54.563938  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:54.604088  798414 cri.go:89] found id: "65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418"
	I0815 01:31:54.604161  798414 cri.go:89] found id: "8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5"
	I0815 01:31:54.604173  798414 cri.go:89] found id: ""
	I0815 01:31:54.604181  798414 logs.go:276] 2 containers: [65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418 8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5]
	I0815 01:31:54.604282  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.607763  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.611024  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:54.611091  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:54.649541  798414 cri.go:89] found id: "30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd"
	I0815 01:31:54.649565  798414 cri.go:89] found id: "d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f"
	I0815 01:31:54.649570  798414 cri.go:89] found id: ""
	I0815 01:31:54.649577  798414 logs.go:276] 2 containers: [30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f]
	I0815 01:31:54.649635  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.653540  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.656990  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:54.657061  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:54.705048  798414 cri.go:89] found id: "6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b"
	I0815 01:31:54.705073  798414 cri.go:89] found id: "8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1"
	I0815 01:31:54.705078  798414 cri.go:89] found id: ""
	I0815 01:31:54.705086  798414 logs.go:276] 2 containers: [6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b 8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1]
	I0815 01:31:54.705142  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.708794  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.712050  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:54.712119  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:54.761867  798414 cri.go:89] found id: "666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6"
	I0815 01:31:54.761892  798414 cri.go:89] found id: ""
	I0815 01:31:54.761901  798414 logs.go:276] 1 containers: [666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6]
	I0815 01:31:54.761956  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.765515  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:31:54.765592  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:31:54.814259  798414 cri.go:89] found id: "fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3"
	I0815 01:31:54.814287  798414 cri.go:89] found id: "6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414"
	I0815 01:31:54.814292  798414 cri.go:89] found id: ""
	I0815 01:31:54.814299  798414 logs.go:276] 2 containers: [fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3 6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414]
	I0815 01:31:54.814357  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.818624  798414 ssh_runner.go:195] Run: which crictl
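
The container discovery above, and the log gathering that follows, pair two crictl invocations per component: `crictl ps -a --quiet --name=<component>` to find container IDs, then `crictl logs --tail 400 <id>` to dump each container's recent output. A minimal local sketch of that loop is below; in the real run these commands go through minikube's ssh_runner inside the node, so running them directly on the host is an assumption made purely for illustration.

// crictl_logs_sketch.go - illustrative only; assumes crictl is installed and sudo works non-interactively.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose name matches.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		for _, id := range ids {
			// Mirrors the "crictl logs --tail 400 <id>" commands in the log-gathering step.
			logs, _ := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s]\n%s\n", component, id, logs)
		}
	}
}
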
	I0815 01:31:54.822284  798414 logs.go:123] Gathering logs for kube-scheduler [f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d] ...
	I0815 01:31:54.822315  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d"
	I0815 01:31:54.866675  798414 logs.go:123] Gathering logs for kindnet [6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b] ...
	I0815 01:31:54.866707  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b"
	I0815 01:31:54.927529  798414 logs.go:123] Gathering logs for etcd [1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43] ...
	I0815 01:31:54.927578  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43"
	I0815 01:31:54.977111  798414 logs.go:123] Gathering logs for coredns [08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9] ...
	I0815 01:31:54.977140  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9"
	I0815 01:31:55.029599  798414 logs.go:123] Gathering logs for kube-proxy [65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418] ...
	I0815 01:31:55.029630  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418"
	I0815 01:31:55.072848  798414 logs.go:123] Gathering logs for kube-controller-manager [30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd] ...
	I0815 01:31:55.072876  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd"
	I0815 01:31:55.141657  798414 logs.go:123] Gathering logs for kube-controller-manager [d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f] ...
	I0815 01:31:55.141692  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f"
	I0815 01:31:55.207613  798414 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:55.207649  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 01:31:55.272355  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.206496     664 reflector.go:138] object-"kube-system"/"coredns-token-hqq2w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-hqq2w" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.272573  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.206838     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.272786  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207185     664 reflector.go:138] object-"kube-system"/"kindnet-token-pflml": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-pflml" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.273001  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207498     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-rl52n": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-rl52n" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.273206  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207678     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.273517  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.264938     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-k6pcd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-k6pcd" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.273725  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.114283     664 reflector.go:138] object-"default"/"default-token-j4wgn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-j4wgn" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.276976  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:29 old-k8s-version-145466 kubelet[664]: E0815 01:26:29.390721     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:31:55.278325  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:29 old-k8s-version-145466 kubelet[664]: E0815 01:26:29.415771     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.281690  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:42 old-k8s-version-145466 kubelet[664]: E0815 01:26:42.146871     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:31:55.282438  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:49 old-k8s-version-145466 kubelet[664]: E0815 01:26:49.166497     664 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-fksp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-fksp6" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.283970  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:57 old-k8s-version-145466 kubelet[664]: E0815 01:26:57.141841     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.284430  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:00 old-k8s-version-145466 kubelet[664]: E0815 01:27:00.751687     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.284885  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:01 old-k8s-version-145466 kubelet[664]: E0815 01:27:01.756479     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.285325  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:02 old-k8s-version-145466 kubelet[664]: E0815 01:27:02.761739     664 pod_workers.go:191] Error syncing pod d7e84c38-c90e-427c-bfc5-45adf788d6fe ("storage-provisioner_kube-system(d7e84c38-c90e-427c-bfc5-45adf788d6fe)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d7e84c38-c90e-427c-bfc5-45adf788d6fe)"
	W0815 01:31:55.285983  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:08 old-k8s-version-145466 kubelet[664]: E0815 01:27:08.528394     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.288438  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:11 old-k8s-version-145466 kubelet[664]: E0815 01:27:11.140512     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:31:55.289156  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:20 old-k8s-version-145466 kubelet[664]: E0815 01:27:20.809846     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.289339  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:24 old-k8s-version-145466 kubelet[664]: E0815 01:27:24.133897     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.289665  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:28 old-k8s-version-145466 kubelet[664]: E0815 01:27:28.528406     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.289850  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:38 old-k8s-version-145466 kubelet[664]: E0815 01:27:38.186907     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.290435  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:42 old-k8s-version-145466 kubelet[664]: E0815 01:27:42.877822     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.290766  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:48 old-k8s-version-145466 kubelet[664]: E0815 01:27:48.529042     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.293220  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:53 old-k8s-version-145466 kubelet[664]: E0815 01:27:53.141943     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:31:55.293549  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:02 old-k8s-version-145466 kubelet[664]: E0815 01:28:02.132198     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.293733  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:04 old-k8s-version-145466 kubelet[664]: E0815 01:28:04.133346     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.293928  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:16 old-k8s-version-145466 kubelet[664]: E0815 01:28:16.144812     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.294252  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:17 old-k8s-version-145466 kubelet[664]: E0815 01:28:17.132160     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.294839  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:29 old-k8s-version-145466 kubelet[664]: E0815 01:28:29.028736     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.295025  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:31 old-k8s-version-145466 kubelet[664]: E0815 01:28:31.132458     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.295352  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:38 old-k8s-version-145466 kubelet[664]: E0815 01:28:38.528400     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.295538  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:44 old-k8s-version-145466 kubelet[664]: E0815 01:28:44.132479     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.295872  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:51 old-k8s-version-145466 kubelet[664]: E0815 01:28:51.133407     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.296062  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:56 old-k8s-version-145466 kubelet[664]: E0815 01:28:56.135291     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.296387  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:04 old-k8s-version-145466 kubelet[664]: E0815 01:29:04.132344     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.296569  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:10 old-k8s-version-145466 kubelet[664]: E0815 01:29:10.132536     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.296893  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:17 old-k8s-version-145466 kubelet[664]: E0815 01:29:17.132162     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.299321  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:23 old-k8s-version-145466 kubelet[664]: E0815 01:29:23.139752     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:31:55.299673  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:28 old-k8s-version-145466 kubelet[664]: E0815 01:29:28.139064     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.299863  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:34 old-k8s-version-145466 kubelet[664]: E0815 01:29:34.132664     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.300190  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:40 old-k8s-version-145466 kubelet[664]: E0815 01:29:40.133096     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.300379  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:49 old-k8s-version-145466 kubelet[664]: E0815 01:29:49.133068     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.300968  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:51 old-k8s-version-145466 kubelet[664]: E0815 01:29:51.281771     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.301293  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:58 old-k8s-version-145466 kubelet[664]: E0815 01:29:58.528848     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.301485  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:00 old-k8s-version-145466 kubelet[664]: E0815 01:30:00.160446     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.301814  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:10 old-k8s-version-145466 kubelet[664]: E0815 01:30:10.135730     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.301998  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:11 old-k8s-version-145466 kubelet[664]: E0815 01:30:11.132453     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.302337  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:22 old-k8s-version-145466 kubelet[664]: E0815 01:30:22.132670     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.302530  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:26 old-k8s-version-145466 kubelet[664]: E0815 01:30:26.135386     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.302859  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:33 old-k8s-version-145466 kubelet[664]: E0815 01:30:33.132520     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.303041  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:41 old-k8s-version-145466 kubelet[664]: E0815 01:30:41.132604     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.303369  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:45 old-k8s-version-145466 kubelet[664]: E0815 01:30:45.132399     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.303551  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:54 old-k8s-version-145466 kubelet[664]: E0815 01:30:54.132932     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.303895  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:56 old-k8s-version-145466 kubelet[664]: E0815 01:30:56.132938     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.304080  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:06 old-k8s-version-145466 kubelet[664]: E0815 01:31:06.132954     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.304404  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:11 old-k8s-version-145466 kubelet[664]: E0815 01:31:11.132193     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.304589  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:20 old-k8s-version-145466 kubelet[664]: E0815 01:31:20.132734     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.304913  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:23 old-k8s-version-145466 kubelet[664]: E0815 01:31:23.132188     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.305096  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:31 old-k8s-version-145466 kubelet[664]: E0815 01:31:31.132519     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.305420  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:38 old-k8s-version-145466 kubelet[664]: E0815 01:31:38.133982     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.305603  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:42 old-k8s-version-145466 kubelet[664]: E0815 01:31:42.132790     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.305928  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.132720     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.306109  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.133955     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0815 01:31:55.306119  798414 logs.go:123] Gathering logs for kube-apiserver [5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645] ...
	I0815 01:31:55.306135  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645"
	I0815 01:31:55.368137  798414 logs.go:123] Gathering logs for kindnet [8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1] ...
	I0815 01:31:55.368175  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1"
	I0815 01:31:55.418716  798414 logs.go:123] Gathering logs for kubernetes-dashboard [666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6] ...
	I0815 01:31:55.418749  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6"
	I0815 01:31:55.463235  798414 logs.go:123] Gathering logs for storage-provisioner [fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3] ...
	I0815 01:31:55.463266  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3"
	I0815 01:31:55.524586  798414 logs.go:123] Gathering logs for container status ...
	I0815 01:31:55.524614  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:55.586139  798414 logs.go:123] Gathering logs for etcd [149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d] ...
	I0815 01:31:55.586167  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d"
	I0815 01:31:55.628448  798414 logs.go:123] Gathering logs for kube-scheduler [ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8] ...
	I0815 01:31:55.628476  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8"
	I0815 01:31:55.667804  798414 logs.go:123] Gathering logs for kube-apiserver [d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f] ...
	I0815 01:31:55.667828  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f"
	I0815 01:31:55.730471  798414 logs.go:123] Gathering logs for coredns [56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4] ...
	I0815 01:31:55.730513  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4"
	I0815 01:31:55.779800  798414 logs.go:123] Gathering logs for kube-proxy [8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5] ...
	I0815 01:31:55.779833  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5"
	I0815 01:31:55.823656  798414 logs.go:123] Gathering logs for storage-provisioner [6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414] ...
	I0815 01:31:55.823733  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414"
	I0815 01:31:55.861500  798414 logs.go:123] Gathering logs for containerd ...
	I0815 01:31:55.861528  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0815 01:31:55.921791  798414 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:55.921824  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:55.943979  798414 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:55.944021  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:31:56.104386  798414 out.go:304] Setting ErrFile to fd 2...
	I0815 01:31:56.104411  798414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 01:31:56.104602  798414 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0815 01:31:56.104620  798414 out.go:239]   Aug 15 01:31:31 old-k8s-version-145466 kubelet[664]: E0815 01:31:31.132519     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 15 01:31:31 old-k8s-version-145466 kubelet[664]: E0815 01:31:31.132519     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:56.104640  798414 out.go:239]   Aug 15 01:31:38 old-k8s-version-145466 kubelet[664]: E0815 01:31:38.133982     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	  Aug 15 01:31:38 old-k8s-version-145466 kubelet[664]: E0815 01:31:38.133982     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:56.104655  798414 out.go:239]   Aug 15 01:31:42 old-k8s-version-145466 kubelet[664]: E0815 01:31:42.132790     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 15 01:31:42 old-k8s-version-145466 kubelet[664]: E0815 01:31:42.132790     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:56.104675  798414 out.go:239]   Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.132720     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	  Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.132720     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:56.104690  798414 out.go:239]   Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.133955     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.133955     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0815 01:31:56.104697  798414 out.go:304] Setting ErrFile to fd 2...
	I0815 01:31:56.104709  798414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:32:06.105579  798414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:06.118752  798414 api_server.go:72] duration metric: took 5m57.813041637s to wait for apiserver process to appear ...
	I0815 01:32:06.118774  798414 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:32:06.118810  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:06.118865  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:06.165617  798414 cri.go:89] found id: "5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645"
	I0815 01:32:06.165640  798414 cri.go:89] found id: "d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f"
	I0815 01:32:06.165646  798414 cri.go:89] found id: ""
	I0815 01:32:06.165653  798414 logs.go:276] 2 containers: [5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645 d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f]
	I0815 01:32:06.165708  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.169565  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.173032  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0815 01:32:06.173102  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:06.214710  798414 cri.go:89] found id: "1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43"
	I0815 01:32:06.214731  798414 cri.go:89] found id: "149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d"
	I0815 01:32:06.214735  798414 cri.go:89] found id: ""
	I0815 01:32:06.214743  798414 logs.go:276] 2 containers: [1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43 149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d]
	I0815 01:32:06.214801  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.219522  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.223511  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0815 01:32:06.223588  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:06.275179  798414 cri.go:89] found id: "08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9"
	I0815 01:32:06.275200  798414 cri.go:89] found id: "56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4"
	I0815 01:32:06.275205  798414 cri.go:89] found id: ""
	I0815 01:32:06.275212  798414 logs.go:276] 2 containers: [08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9 56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4]
	I0815 01:32:06.275271  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.279316  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.283547  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:06.283635  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:06.325027  798414 cri.go:89] found id: "ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8"
	I0815 01:32:06.325047  798414 cri.go:89] found id: "f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d"
	I0815 01:32:06.325051  798414 cri.go:89] found id: ""
	I0815 01:32:06.325059  798414 logs.go:276] 2 containers: [ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8 f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d]
	I0815 01:32:06.325114  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.328921  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.332566  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:06.332658  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:06.374132  798414 cri.go:89] found id: "65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418"
	I0815 01:32:06.374165  798414 cri.go:89] found id: "8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5"
	I0815 01:32:06.374173  798414 cri.go:89] found id: ""
	I0815 01:32:06.374184  798414 logs.go:276] 2 containers: [65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418 8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5]
	I0815 01:32:06.374246  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.379778  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.383769  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:06.383887  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:06.424513  798414 cri.go:89] found id: "30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd"
	I0815 01:32:06.424585  798414 cri.go:89] found id: "d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f"
	I0815 01:32:06.424598  798414 cri.go:89] found id: ""
	I0815 01:32:06.424607  798414 logs.go:276] 2 containers: [30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f]
	I0815 01:32:06.424671  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.428875  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.433119  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:06.433275  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:06.478733  798414 cri.go:89] found id: "6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b"
	I0815 01:32:06.478760  798414 cri.go:89] found id: "8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1"
	I0815 01:32:06.478767  798414 cri.go:89] found id: ""
	I0815 01:32:06.478775  798414 logs.go:276] 2 containers: [6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b 8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1]
	I0815 01:32:06.478845  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.482927  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.486698  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:06.486788  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:06.536723  798414 cri.go:89] found id: "666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6"
	I0815 01:32:06.536748  798414 cri.go:89] found id: ""
	I0815 01:32:06.536757  798414 logs.go:276] 1 containers: [666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6]
	I0815 01:32:06.536832  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.540620  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:32:06.540726  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:32:06.577792  798414 cri.go:89] found id: "fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3"
	I0815 01:32:06.577814  798414 cri.go:89] found id: "6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414"
	I0815 01:32:06.577820  798414 cri.go:89] found id: ""
	I0815 01:32:06.577827  798414 logs.go:276] 2 containers: [fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3 6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414]
	I0815 01:32:06.577881  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.581525  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.585154  798414 logs.go:123] Gathering logs for kube-proxy [65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418] ...
	I0815 01:32:06.585180  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418"
	I0815 01:32:06.629579  798414 logs.go:123] Gathering logs for kubernetes-dashboard [666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6] ...
	I0815 01:32:06.629604  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6"
	I0815 01:32:06.671944  798414 logs.go:123] Gathering logs for storage-provisioner [fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3] ...
	I0815 01:32:06.671982  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3"
	I0815 01:32:06.713621  798414 logs.go:123] Gathering logs for container status ...
	I0815 01:32:06.713664  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:06.771192  798414 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:06.771223  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:32:06.916807  798414 logs.go:123] Gathering logs for coredns [56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4] ...
	I0815 01:32:06.916881  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4"
	I0815 01:32:06.967152  798414 logs.go:123] Gathering logs for kube-scheduler [ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8] ...
	I0815 01:32:06.967233  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8"
	I0815 01:32:07.009506  798414 logs.go:123] Gathering logs for kube-proxy [8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5] ...
	I0815 01:32:07.009537  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5"
	I0815 01:32:07.049321  798414 logs.go:123] Gathering logs for containerd ...
	I0815 01:32:07.049360  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0815 01:32:07.113454  798414 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:07.113494  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 01:32:07.177310  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.206496     664 reflector.go:138] object-"kube-system"/"coredns-token-hqq2w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-hqq2w" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.177533  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.206838     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.177748  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207185     664 reflector.go:138] object-"kube-system"/"kindnet-token-pflml": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-pflml" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.177967  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207498     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-rl52n": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-rl52n" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.178175  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207678     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.178488  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.264938     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-k6pcd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-k6pcd" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.178700  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.114283     664 reflector.go:138] object-"default"/"default-token-j4wgn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-j4wgn" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.181924  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:29 old-k8s-version-145466 kubelet[664]: E0815 01:26:29.390721     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.183276  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:29 old-k8s-version-145466 kubelet[664]: E0815 01:26:29.415771     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.186634  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:42 old-k8s-version-145466 kubelet[664]: E0815 01:26:42.146871     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.187378  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:49 old-k8s-version-145466 kubelet[664]: E0815 01:26:49.166497     664 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-fksp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-fksp6" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.188913  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:57 old-k8s-version-145466 kubelet[664]: E0815 01:26:57.141841     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.189372  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:00 old-k8s-version-145466 kubelet[664]: E0815 01:27:00.751687     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.189828  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:01 old-k8s-version-145466 kubelet[664]: E0815 01:27:01.756479     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.190265  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:02 old-k8s-version-145466 kubelet[664]: E0815 01:27:02.761739     664 pod_workers.go:191] Error syncing pod d7e84c38-c90e-427c-bfc5-45adf788d6fe ("storage-provisioner_kube-system(d7e84c38-c90e-427c-bfc5-45adf788d6fe)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d7e84c38-c90e-427c-bfc5-45adf788d6fe)"
	W0815 01:32:07.190931  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:08 old-k8s-version-145466 kubelet[664]: E0815 01:27:08.528394     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.193356  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:11 old-k8s-version-145466 kubelet[664]: E0815 01:27:11.140512     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.194070  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:20 old-k8s-version-145466 kubelet[664]: E0815 01:27:20.809846     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.194257  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:24 old-k8s-version-145466 kubelet[664]: E0815 01:27:24.133897     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.194585  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:28 old-k8s-version-145466 kubelet[664]: E0815 01:27:28.528406     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.194768  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:38 old-k8s-version-145466 kubelet[664]: E0815 01:27:38.186907     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.195356  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:42 old-k8s-version-145466 kubelet[664]: E0815 01:27:42.877822     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.195681  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:48 old-k8s-version-145466 kubelet[664]: E0815 01:27:48.529042     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.198128  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:53 old-k8s-version-145466 kubelet[664]: E0815 01:27:53.141943     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.198456  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:02 old-k8s-version-145466 kubelet[664]: E0815 01:28:02.132198     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.198668  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:04 old-k8s-version-145466 kubelet[664]: E0815 01:28:04.133346     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.198852  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:16 old-k8s-version-145466 kubelet[664]: E0815 01:28:16.144812     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.199177  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:17 old-k8s-version-145466 kubelet[664]: E0815 01:28:17.132160     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.199766  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:29 old-k8s-version-145466 kubelet[664]: E0815 01:28:29.028736     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.199975  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:31 old-k8s-version-145466 kubelet[664]: E0815 01:28:31.132458     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.200308  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:38 old-k8s-version-145466 kubelet[664]: E0815 01:28:38.528400     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.200493  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:44 old-k8s-version-145466 kubelet[664]: E0815 01:28:44.132479     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.200817  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:51 old-k8s-version-145466 kubelet[664]: E0815 01:28:51.133407     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.201005  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:56 old-k8s-version-145466 kubelet[664]: E0815 01:28:56.135291     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.201329  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:04 old-k8s-version-145466 kubelet[664]: E0815 01:29:04.132344     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.201512  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:10 old-k8s-version-145466 kubelet[664]: E0815 01:29:10.132536     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.201837  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:17 old-k8s-version-145466 kubelet[664]: E0815 01:29:17.132162     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.204300  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:23 old-k8s-version-145466 kubelet[664]: E0815 01:29:23.139752     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.204628  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:28 old-k8s-version-145466 kubelet[664]: E0815 01:29:28.139064     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.204812  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:34 old-k8s-version-145466 kubelet[664]: E0815 01:29:34.132664     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.205154  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:40 old-k8s-version-145466 kubelet[664]: E0815 01:29:40.133096     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.205337  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:49 old-k8s-version-145466 kubelet[664]: E0815 01:29:49.133068     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.205925  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:51 old-k8s-version-145466 kubelet[664]: E0815 01:29:51.281771     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.206255  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:58 old-k8s-version-145466 kubelet[664]: E0815 01:29:58.528848     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.206438  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:00 old-k8s-version-145466 kubelet[664]: E0815 01:30:00.160446     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.206767  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:10 old-k8s-version-145466 kubelet[664]: E0815 01:30:10.135730     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.206949  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:11 old-k8s-version-145466 kubelet[664]: E0815 01:30:11.132453     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.207272  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:22 old-k8s-version-145466 kubelet[664]: E0815 01:30:22.132670     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.207454  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:26 old-k8s-version-145466 kubelet[664]: E0815 01:30:26.135386     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.207781  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:33 old-k8s-version-145466 kubelet[664]: E0815 01:30:33.132520     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.207971  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:41 old-k8s-version-145466 kubelet[664]: E0815 01:30:41.132604     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.208297  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:45 old-k8s-version-145466 kubelet[664]: E0815 01:30:45.132399     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.208479  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:54 old-k8s-version-145466 kubelet[664]: E0815 01:30:54.132932     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.208803  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:56 old-k8s-version-145466 kubelet[664]: E0815 01:30:56.132938     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.208985  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:06 old-k8s-version-145466 kubelet[664]: E0815 01:31:06.132954     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.209308  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:11 old-k8s-version-145466 kubelet[664]: E0815 01:31:11.132193     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.209490  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:20 old-k8s-version-145466 kubelet[664]: E0815 01:31:20.132734     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.209814  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:23 old-k8s-version-145466 kubelet[664]: E0815 01:31:23.132188     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.209997  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:31 old-k8s-version-145466 kubelet[664]: E0815 01:31:31.132519     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.210321  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:38 old-k8s-version-145466 kubelet[664]: E0815 01:31:38.133982     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.210506  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:42 old-k8s-version-145466 kubelet[664]: E0815 01:31:42.132790     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.210833  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.132720     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.211019  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.133955     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.213451  798414 logs.go:138] Found kubelet problem: Aug 15 01:32:05 old-k8s-version-145466 kubelet[664]: E0815 01:32:05.164689     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.213780  798414 logs.go:138] Found kubelet problem: Aug 15 01:32:06 old-k8s-version-145466 kubelet[664]: E0815 01:32:06.132574     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	I0815 01:32:07.213790  798414 logs.go:123] Gathering logs for etcd [149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d] ...
	I0815 01:32:07.213804  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d"
	I0815 01:32:07.268209  798414 logs.go:123] Gathering logs for coredns [08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9] ...
	I0815 01:32:07.268240  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9"
	I0815 01:32:07.312267  798414 logs.go:123] Gathering logs for kube-scheduler [f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d] ...
	I0815 01:32:07.312297  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d"
	I0815 01:32:07.353856  798414 logs.go:123] Gathering logs for kube-controller-manager [d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f] ...
	I0815 01:32:07.353886  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f"
	I0815 01:32:07.412945  798414 logs.go:123] Gathering logs for kindnet [6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b] ...
	I0815 01:32:07.412980  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b"
	I0815 01:32:07.480680  798414 logs.go:123] Gathering logs for etcd [1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43] ...
	I0815 01:32:07.480758  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43"
	I0815 01:32:07.557927  798414 logs.go:123] Gathering logs for kube-controller-manager [30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd] ...
	I0815 01:32:07.557960  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd"
	I0815 01:32:07.614762  798414 logs.go:123] Gathering logs for kindnet [8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1] ...
	I0815 01:32:07.614799  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1"
	I0815 01:32:07.664776  798414 logs.go:123] Gathering logs for storage-provisioner [6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414] ...
	I0815 01:32:07.664822  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414"
	I0815 01:32:07.703095  798414 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:07.703130  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:07.722145  798414 logs.go:123] Gathering logs for kube-apiserver [5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645] ...
	I0815 01:32:07.722176  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645"
	I0815 01:32:07.778182  798414 logs.go:123] Gathering logs for kube-apiserver [d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f] ...
	I0815 01:32:07.778222  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f"
	I0815 01:32:07.835453  798414 out.go:304] Setting ErrFile to fd 2...
	I0815 01:32:07.835484  798414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 01:32:07.835549  798414 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0815 01:32:07.835563  798414 out.go:239]   Aug 15 01:31:42 old-k8s-version-145466 kubelet[664]: E0815 01:31:42.132790     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 15 01:31:42 old-k8s-version-145466 kubelet[664]: E0815 01:31:42.132790     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.835572  798414 out.go:239]   Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.132720     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	  Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.132720     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.835580  798414 out.go:239]   Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.133955     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.133955     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.835587  798414 out.go:239]   Aug 15 01:32:05 old-k8s-version-145466 kubelet[664]: E0815 01:32:05.164689     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	  Aug 15 01:32:05 old-k8s-version-145466 kubelet[664]: E0815 01:32:05.164689     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.835601  798414 out.go:239]   Aug 15 01:32:06 old-k8s-version-145466 kubelet[664]: E0815 01:32:06.132574     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	  Aug 15 01:32:06 old-k8s-version-145466 kubelet[664]: E0815 01:32:06.132574     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	I0815 01:32:07.835607  798414 out.go:304] Setting ErrFile to fd 2...
	I0815 01:32:07.835621  798414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:32:17.836740  798414 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0815 01:32:17.849082  798414 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0815 01:32:17.851627  798414 out.go:177] 
	W0815 01:32:17.854581  798414 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0815 01:32:17.854634  798414 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0815 01:32:17.854660  798414 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0815 01:32:17.854666  798414 out.go:239] * 
	* 
	W0815 01:32:17.856250  798414 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:32:17.863608  798414 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-145466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-145466
helpers_test.go:235: (dbg) docker inspect old-k8s-version-145466:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5855b1f20999a15761f9b4ac6911a8fc19051ec20e47fbd1c365f58751ca8278",
	        "Created": "2024-08-15T01:23:08.661953003Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 798615,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T01:26:00.297919427Z",
	            "FinishedAt": "2024-08-15T01:25:59.061047456Z"
	        },
	        "Image": "sha256:2b339a1cac4376103734d3066f7ccdf0ac7377a2f8f8d5eb9e81c29f3abcec50",
	        "ResolvConfPath": "/var/lib/docker/containers/5855b1f20999a15761f9b4ac6911a8fc19051ec20e47fbd1c365f58751ca8278/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5855b1f20999a15761f9b4ac6911a8fc19051ec20e47fbd1c365f58751ca8278/hostname",
	        "HostsPath": "/var/lib/docker/containers/5855b1f20999a15761f9b4ac6911a8fc19051ec20e47fbd1c365f58751ca8278/hosts",
	        "LogPath": "/var/lib/docker/containers/5855b1f20999a15761f9b4ac6911a8fc19051ec20e47fbd1c365f58751ca8278/5855b1f20999a15761f9b4ac6911a8fc19051ec20e47fbd1c365f58751ca8278-json.log",
	        "Name": "/old-k8s-version-145466",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-145466:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-145466",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8fa07c694dbba46c257c73caaea090e7f9f192a3c9dd4e4f0bad3bd9474b4a81-init/diff:/var/lib/docker/overlay2/724d641fa67867c1f8a89bb3b136ff9997d84663650d206cbef2b533f5f97838/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fa07c694dbba46c257c73caaea090e7f9f192a3c9dd4e4f0bad3bd9474b4a81/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fa07c694dbba46c257c73caaea090e7f9f192a3c9dd4e4f0bad3bd9474b4a81/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fa07c694dbba46c257c73caaea090e7f9f192a3c9dd4e4f0bad3bd9474b4a81/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-145466",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-145466/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-145466",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-145466",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-145466",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7877f4e547154d3239ffa605a29802b604e1c5b2477aa2ae911f68aa14b9b494",
	            "SandboxKey": "/var/run/docker/netns/7877f4e54715",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-145466": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "4910ff7e81efe82fbc19afffd0db73a9ac7793c2a634229a5e83dd7ddb6a42c4",
	                    "EndpointID": "c0970a44525f4d3927e0189bac16e1fd17777f2b40c205c605203064c591de0f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-145466",
	                        "5855b1f20999"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-145466 -n old-k8s-version-145466
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-145466 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-145466 logs -n 25: (2.030709434s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-404506 sudo find                             | cilium-404506             | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-404506 sudo crio                             | cilium-404506             | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-404506                                       | cilium-404506             | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	| start   | -p force-systemd-env-673385                            | force-systemd-env-673385  | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:22 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-246341                              | force-systemd-flag-246341 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-246341                           | force-systemd-flag-246341 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	| start   | -p cert-expiration-480110                              | cert-expiration-480110    | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:22 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-673385                               | force-systemd-env-673385  | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-673385                            | force-systemd-env-673385  | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	| start   | -p cert-options-476187                                 | cert-options-476187       | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-476187 ssh                                | cert-options-476187       | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-476187 -- sudo                         | cert-options-476187       | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-476187                                 | cert-options-476187       | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:23 UTC |
	| start   | -p old-k8s-version-145466                              | old-k8s-version-145466    | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:25 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-480110                              | cert-expiration-480110    | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC | 15 Aug 24 01:25 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-480110                              | cert-expiration-480110    | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC | 15 Aug 24 01:25 UTC |
	| start   | -p no-preload-891255                                   | no-preload-891255         | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC | 15 Aug 24 01:27 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-145466        | old-k8s-version-145466    | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC | 15 Aug 24 01:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-145466                              | old-k8s-version-145466    | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC | 15 Aug 24 01:25 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-145466             | old-k8s-version-145466    | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC | 15 Aug 24 01:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-145466                              | old-k8s-version-145466    | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-891255             | no-preload-891255         | jenkins | v1.33.1 | 15 Aug 24 01:27 UTC | 15 Aug 24 01:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-891255                                   | no-preload-891255         | jenkins | v1.33.1 | 15 Aug 24 01:27 UTC | 15 Aug 24 01:27 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-891255                  | no-preload-891255         | jenkins | v1.33.1 | 15 Aug 24 01:27 UTC | 15 Aug 24 01:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-891255                                   | no-preload-891255         | jenkins | v1.33.1 | 15 Aug 24 01:27 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:27:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:27:30.217561  803761 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:27:30.217701  803761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:27:30.217713  803761 out.go:304] Setting ErrFile to fd 2...
	I0815 01:27:30.217720  803761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:27:30.217982  803761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	I0815 01:27:30.218444  803761 out.go:298] Setting JSON to false
	I0815 01:27:30.219594  803761 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18573,"bootTime":1723666678,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 01:27:30.219680  803761 start.go:139] virtualization:  
	I0815 01:27:30.223461  803761 out.go:177] * [no-preload-891255] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 01:27:30.225728  803761 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:27:30.225828  803761 notify.go:220] Checking for updates...
	I0815 01:27:30.228839  803761 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:27:30.230616  803761 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 01:27:30.233073  803761 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	I0815 01:27:30.235130  803761 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 01:27:30.236762  803761 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:27:30.238942  803761 config.go:182] Loaded profile config "no-preload-891255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 01:27:30.239507  803761 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:27:30.262308  803761 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 01:27:30.262449  803761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 01:27:30.352647  803761 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 01:27:30.342788552 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 01:27:30.352759  803761 docker.go:307] overlay module found
	I0815 01:27:30.355397  803761 out.go:177] * Using the docker driver based on existing profile
	I0815 01:27:30.358421  803761 start.go:297] selected driver: docker
	I0815 01:27:30.358441  803761 start.go:901] validating driver "docker" against &{Name:no-preload-891255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-891255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:27:30.358536  803761 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:27:30.359361  803761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 01:27:30.427324  803761 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 01:27:30.41837525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 01:27:30.427717  803761 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:27:30.427775  803761 cni.go:84] Creating CNI manager for ""
	I0815 01:27:30.427791  803761 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 01:27:30.427896  803761 start.go:340] cluster config:
	{Name:no-preload-891255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-891255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:27:30.430593  803761 out.go:177] * Starting "no-preload-891255" primary control-plane node in "no-preload-891255" cluster
	I0815 01:27:30.432721  803761 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0815 01:27:30.434488  803761 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 01:27:30.436306  803761 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 01:27:30.436385  803761 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 01:27:30.436456  803761 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/config.json ...
	I0815 01:27:30.436939  803761 cache.go:107] acquiring lock: {Name:mk377bae187156e25e03e74dc26a8fc6b664c12f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:27:30.437025  803761 cache.go:115] /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0815 01:27:30.437039  803761 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 108.956µs
	I0815 01:27:30.437048  803761 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0815 01:27:30.437058  803761 cache.go:107] acquiring lock: {Name:mkfa33fd52f81b72095b4a245252d467804de46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:27:30.437094  803761 cache.go:115] /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0815 01:27:30.437104  803761 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 47.212µs
	I0815 01:27:30.437111  803761 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0815 01:27:30.437121  803761 cache.go:107] acquiring lock: {Name:mkc20c405e053840f9847268b96de1815177ab97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:27:30.437152  803761 cache.go:115] /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0815 01:27:30.437162  803761 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 42.626µs
	I0815 01:27:30.437220  803761 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0815 01:27:30.437232  803761 cache.go:107] acquiring lock: {Name:mk516ca1588a985afdddcbddef9d2e8e029d52ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:27:30.437338  803761 cache.go:107] acquiring lock: {Name:mk571f6e7ebb7534db4e04b364c3f2c216aedb9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:27:30.437388  803761 cache.go:115] /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0815 01:27:30.437400  803761 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 65.231µs
	I0815 01:27:30.437406  803761 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0815 01:27:30.437416  803761 cache.go:107] acquiring lock: {Name:mk1e6bd7ba33ff5f26626ccd6b8c25445619a36a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:27:30.437448  803761 cache.go:115] /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0815 01:27:30.437457  803761 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 42.305µs
	I0815 01:27:30.437464  803761 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0815 01:27:30.437475  803761 cache.go:107] acquiring lock: {Name:mkc84118c26dca31c17232db577c88bba77012fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:27:30.437505  803761 cache.go:115] /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0815 01:27:30.437515  803761 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 41.378µs
	I0815 01:27:30.437521  803761 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0815 01:27:30.437531  803761 cache.go:107] acquiring lock: {Name:mk72ede4086bb42d67cd2dd687eaa1c3b51091cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:27:30.437594  803761 cache.go:115] /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0815 01:27:30.437608  803761 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 78.179µs
	I0815 01:27:30.437616  803761 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0815 01:27:30.437639  803761 cache.go:115] /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0815 01:27:30.437649  803761 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 418.904µs
	I0815 01:27:30.437655  803761 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /home/jenkins/minikube-integration/19443-587265/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0815 01:27:30.437691  803761 cache.go:87] Successfully saved all images to host disk.
	W0815 01:27:30.455772  803761 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 is of wrong architecture
	I0815 01:27:30.455794  803761 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 01:27:30.455975  803761 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 01:27:30.456590  803761 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 01:27:30.456601  803761 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 01:27:30.456613  803761 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 01:27:30.456619  803761 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 01:27:30.588228  803761 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 01:27:30.588267  803761 cache.go:194] Successfully downloaded all kic artifacts
	I0815 01:27:30.588311  803761 start.go:360] acquireMachinesLock for no-preload-891255: {Name:mkbcde4c98f1feca8d388937567a29bcafc800e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:27:30.588410  803761 start.go:364] duration metric: took 74.839µs to acquireMachinesLock for "no-preload-891255"
	I0815 01:27:30.588434  803761 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:27:30.588445  803761 fix.go:54] fixHost starting: 
	I0815 01:27:30.588725  803761 cli_runner.go:164] Run: docker container inspect no-preload-891255 --format={{.State.Status}}
	I0815 01:27:30.604885  803761 fix.go:112] recreateIfNeeded on no-preload-891255: state=Stopped err=<nil>
	W0815 01:27:30.604918  803761 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:27:30.607211  803761 out.go:177] * Restarting existing docker container for "no-preload-891255" ...
	I0815 01:27:30.326045  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:32.326505  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:30.609064  803761 cli_runner.go:164] Run: docker start no-preload-891255
	I0815 01:27:30.946277  803761 cli_runner.go:164] Run: docker container inspect no-preload-891255 --format={{.State.Status}}
	I0815 01:27:30.973780  803761 kic.go:430] container "no-preload-891255" state is running.
	I0815 01:27:30.975532  803761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891255
	I0815 01:27:30.997578  803761 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/config.json ...
	I0815 01:27:30.997806  803761 machine.go:94] provisionDockerMachine start ...
	I0815 01:27:30.997872  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:31.020856  803761 main.go:141] libmachine: Using SSH client type: native
	I0815 01:27:31.021125  803761 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33810 <nil> <nil>}
	I0815 01:27:31.021140  803761 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:27:31.021719  803761 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45724->127.0.0.1:33810: read: connection reset by peer
	I0815 01:27:34.159503  803761 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-891255
	
	I0815 01:27:34.159591  803761 ubuntu.go:169] provisioning hostname "no-preload-891255"
	I0815 01:27:34.159697  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:34.176612  803761 main.go:141] libmachine: Using SSH client type: native
	I0815 01:27:34.176871  803761 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33810 <nil> <nil>}
	I0815 01:27:34.176890  803761 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-891255 && echo "no-preload-891255" | sudo tee /etc/hostname
	I0815 01:27:34.331324  803761 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-891255
	
	I0815 01:27:34.331489  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:34.350487  803761 main.go:141] libmachine: Using SSH client type: native
	I0815 01:27:34.350750  803761 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33810 <nil> <nil>}
	I0815 01:27:34.350767  803761 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-891255' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-891255/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-891255' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:27:34.488067  803761 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:27:34.488103  803761 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19443-587265/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-587265/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-587265/.minikube}
	I0815 01:27:34.488127  803761 ubuntu.go:177] setting up certificates
	I0815 01:27:34.488137  803761 provision.go:84] configureAuth start
	I0815 01:27:34.488196  803761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891255
	I0815 01:27:34.505440  803761 provision.go:143] copyHostCerts
	I0815 01:27:34.505507  803761 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-587265/.minikube/ca.pem, removing ...
	I0815 01:27:34.505523  803761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-587265/.minikube/ca.pem
	I0815 01:27:34.505607  803761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-587265/.minikube/ca.pem (1082 bytes)
	I0815 01:27:34.505727  803761 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-587265/.minikube/cert.pem, removing ...
	I0815 01:27:34.505740  803761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-587265/.minikube/cert.pem
	I0815 01:27:34.505770  803761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-587265/.minikube/cert.pem (1123 bytes)
	I0815 01:27:34.505839  803761 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-587265/.minikube/key.pem, removing ...
	I0815 01:27:34.505849  803761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-587265/.minikube/key.pem
	I0815 01:27:34.505875  803761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-587265/.minikube/key.pem (1675 bytes)
	I0815 01:27:34.505937  803761 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-587265/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca-key.pem org=jenkins.no-preload-891255 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-891255]
	I0815 01:27:34.765073  803761 provision.go:177] copyRemoteCerts
	I0815 01:27:34.765164  803761 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:27:34.765236  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:34.782604  803761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/no-preload-891255/id_rsa Username:docker}
	I0815 01:27:34.882448  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 01:27:34.911055  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 01:27:34.938186  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:27:34.970893  803761 provision.go:87] duration metric: took 482.741715ms to configureAuth
	I0815 01:27:34.970922  803761 ubuntu.go:193] setting minikube options for container-runtime
	I0815 01:27:34.971125  803761 config.go:182] Loaded profile config "no-preload-891255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 01:27:34.971137  803761 machine.go:97] duration metric: took 3.973319383s to provisionDockerMachine
	I0815 01:27:34.971145  803761 start.go:293] postStartSetup for "no-preload-891255" (driver="docker")
	I0815 01:27:34.971159  803761 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:27:34.971220  803761 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:27:34.971264  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:34.988187  803761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/no-preload-891255/id_rsa Username:docker}
	I0815 01:27:35.094103  803761 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:27:35.098107  803761 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 01:27:35.098147  803761 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 01:27:35.098160  803761 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 01:27:35.098173  803761 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 01:27:35.098208  803761 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-587265/.minikube/addons for local assets ...
	I0815 01:27:35.098292  803761 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-587265/.minikube/files for local assets ...
	I0815 01:27:35.098392  803761 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-587265/.minikube/files/etc/ssl/certs/5926602.pem -> 5926602.pem in /etc/ssl/certs
	I0815 01:27:35.098516  803761 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:27:35.108070  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/files/etc/ssl/certs/5926602.pem --> /etc/ssl/certs/5926602.pem (1708 bytes)
	I0815 01:27:35.134614  803761 start.go:296] duration metric: took 163.450011ms for postStartSetup
	I0815 01:27:35.134743  803761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 01:27:35.134837  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:35.153119  803761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/no-preload-891255/id_rsa Username:docker}
	I0815 01:27:35.249320  803761 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 01:27:35.253770  803761 fix.go:56] duration metric: took 4.665319499s for fixHost
	I0815 01:27:35.253797  803761 start.go:83] releasing machines lock for "no-preload-891255", held for 4.66537485s
	I0815 01:27:35.253884  803761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891255
	I0815 01:27:35.271001  803761 ssh_runner.go:195] Run: cat /version.json
	I0815 01:27:35.271066  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:35.271092  803761 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:27:35.271152  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:35.287966  803761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/no-preload-891255/id_rsa Username:docker}
	I0815 01:27:35.303109  803761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/no-preload-891255/id_rsa Username:docker}
	I0815 01:27:35.533499  803761 ssh_runner.go:195] Run: systemctl --version
	I0815 01:27:35.537985  803761 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 01:27:35.542537  803761 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0815 01:27:35.560110  803761 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0815 01:27:35.560217  803761 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:27:35.569395  803761 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 01:27:35.569475  803761 start.go:495] detecting cgroup driver to use...
	I0815 01:27:35.569521  803761 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 01:27:35.569611  803761 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 01:27:35.583051  803761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 01:27:35.595039  803761 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:27:35.595149  803761 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:27:35.609596  803761 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:27:35.623379  803761 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:27:35.707493  803761 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:27:35.795262  803761 docker.go:233] disabling docker service ...
	I0815 01:27:35.795328  803761 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:27:35.808774  803761 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:27:35.824144  803761 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:27:35.908852  803761 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:27:36.018182  803761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:27:36.031666  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:27:36.051746  803761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 01:27:36.063557  803761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 01:27:36.075101  803761 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 01:27:36.075219  803761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 01:27:36.087706  803761 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 01:27:36.098900  803761 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 01:27:36.109941  803761 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 01:27:36.121226  803761 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:27:36.137079  803761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 01:27:36.149018  803761 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 01:27:36.159749  803761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 01:27:36.170739  803761 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:27:36.180275  803761 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:27:36.189120  803761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:27:36.273204  803761 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 01:27:36.432963  803761 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0815 01:27:36.433038  803761 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0815 01:27:36.437282  803761 start.go:563] Will wait 60s for crictl version
	I0815 01:27:36.437357  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:27:36.440693  803761 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:27:36.484837  803761 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0815 01:27:36.484911  803761 ssh_runner.go:195] Run: containerd --version
	I0815 01:27:36.511673  803761 ssh_runner.go:195] Run: containerd --version
	I0815 01:27:36.541876  803761 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0815 01:27:36.543510  803761 cli_runner.go:164] Run: docker network inspect no-preload-891255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 01:27:36.579131  803761 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0815 01:27:36.588569  803761 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:27:36.600920  803761 kubeadm.go:883] updating cluster {Name:no-preload-891255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-891255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:27:36.601051  803761 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 01:27:36.601099  803761 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:27:36.645978  803761 containerd.go:627] all images are preloaded for containerd runtime.
	I0815 01:27:36.645999  803761 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:27:36.646008  803761 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.0 containerd true true} ...
	I0815 01:27:36.646118  803761 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-891255 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-891255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:27:36.646181  803761 ssh_runner.go:195] Run: sudo crictl info
	I0815 01:27:36.688709  803761 cni.go:84] Creating CNI manager for ""
	I0815 01:27:36.688737  803761 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 01:27:36.688750  803761 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:27:36.688775  803761 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-891255 NodeName:no-preload-891255 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:27:36.688924  803761 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-891255"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:27:36.689001  803761 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:27:36.699237  803761 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:27:36.699330  803761 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:27:36.709004  803761 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0815 01:27:36.728658  803761 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:27:36.747120  803761 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0815 01:27:36.772634  803761 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0815 01:27:36.776453  803761 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:27:36.792503  803761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:27:36.887664  803761 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:27:36.903374  803761 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255 for IP: 192.168.76.2
	I0815 01:27:36.903449  803761 certs.go:194] generating shared ca certs ...
	I0815 01:27:36.903481  803761 certs.go:226] acquiring lock for ca certs: {Name:mkd44da6bd4b219dfe871c9c58d5756252de3a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:27:36.903641  803761 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-587265/.minikube/ca.key
	I0815 01:27:36.903745  803761 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.key
	I0815 01:27:36.903773  803761 certs.go:256] generating profile certs ...
	I0815 01:27:36.903936  803761 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.key
	I0815 01:27:36.904039  803761 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/apiserver.key.69237bdd
	I0815 01:27:36.904102  803761 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/proxy-client.key
	I0815 01:27:36.904243  803761 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/592660.pem (1338 bytes)
	W0815 01:27:36.904312  803761 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-587265/.minikube/certs/592660_empty.pem, impossibly tiny 0 bytes
	I0815 01:27:36.904345  803761 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 01:27:36.904396  803761 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/ca.pem (1082 bytes)
	I0815 01:27:36.904448  803761 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:27:36.904499  803761 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/certs/key.pem (1675 bytes)
	I0815 01:27:36.904575  803761 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-587265/.minikube/files/etc/ssl/certs/5926602.pem (1708 bytes)
	I0815 01:27:36.905247  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:27:36.931179  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 01:27:36.962131  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:27:36.987576  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:27:37.022483  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 01:27:37.060458  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 01:27:37.110653  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:27:37.152558  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:27:37.179801  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/files/etc/ssl/certs/5926602.pem --> /usr/share/ca-certificates/5926602.pem (1708 bytes)
	I0815 01:27:37.209981  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:27:37.239473  803761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-587265/.minikube/certs/592660.pem --> /usr/share/ca-certificates/592660.pem (1338 bytes)
	I0815 01:27:37.266845  803761 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:27:37.287391  803761 ssh_runner.go:195] Run: openssl version
	I0815 01:27:37.295180  803761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5926602.pem && ln -fs /usr/share/ca-certificates/5926602.pem /etc/ssl/certs/5926602.pem"
	I0815 01:27:37.306727  803761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5926602.pem
	I0815 01:27:37.310424  803761 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:47 /usr/share/ca-certificates/5926602.pem
	I0815 01:27:37.310536  803761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5926602.pem
	I0815 01:27:37.320089  803761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5926602.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:27:37.333178  803761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:27:37.343331  803761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:27:37.347389  803761 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:36 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:27:37.347460  803761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:27:37.355090  803761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:27:37.365007  803761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/592660.pem && ln -fs /usr/share/ca-certificates/592660.pem /etc/ssl/certs/592660.pem"
	I0815 01:27:37.374829  803761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/592660.pem
	I0815 01:27:37.378492  803761 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:47 /usr/share/ca-certificates/592660.pem
	I0815 01:27:37.378559  803761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/592660.pem
	I0815 01:27:37.386206  803761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/592660.pem /etc/ssl/certs/51391683.0"
	I0815 01:27:37.395770  803761 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:27:37.399598  803761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:27:37.406724  803761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:27:37.414299  803761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:27:37.421554  803761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:27:37.429092  803761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:27:37.437083  803761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 01:27:37.444633  803761 kubeadm.go:392] StartCluster: {Name:no-preload-891255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-891255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:27:37.444736  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0815 01:27:37.444817  803761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:27:37.492095  803761 cri.go:89] found id: "ba8534f9d63589367129cbff9b8d4f2498b7f86093c708dd66e2ccf978b436f1"
	I0815 01:27:37.492173  803761 cri.go:89] found id: "5a854e83e9a5556e1a2a9298080f95e0ec2337600228c065637e73d90913c7cd"
	I0815 01:27:37.492193  803761 cri.go:89] found id: "2d448a62ba9b867bd5bd34037fec9b78055264519f2d2574275038fbe92e1f7e"
	I0815 01:27:37.492203  803761 cri.go:89] found id: "f3022d73668343c2effb73b4ac408bd5f9d53392de4778783676da2189835e39"
	I0815 01:27:37.492217  803761 cri.go:89] found id: "adc2315cf56607b40babf5046072a405155130dd7cd2b6c3e46f88da9eaa92f7"
	I0815 01:27:37.492224  803761 cri.go:89] found id: "d9f6521d16507f27bd6386ff17f8b963c257f216da7f467c9210fb9178524678"
	I0815 01:27:37.492227  803761 cri.go:89] found id: "2b053d2cfe8aa99ea78837e3b3ce749a8c262ba3833f5bc139cbaa2c7657cb22"
	I0815 01:27:37.492231  803761 cri.go:89] found id: "b690b96148f2277712ec84a51bec7a82401a426f009a95180cc964bb26bea9fb"
	I0815 01:27:37.492239  803761 cri.go:89] found id: ""
	I0815 01:27:37.492294  803761 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0815 01:27:37.507684  803761 cri.go:116] JSON = null
	W0815 01:27:37.507794  803761 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0815 01:27:37.507934  803761 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:27:37.521514  803761 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:27:37.521581  803761 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:27:37.521656  803761 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:27:37.533796  803761 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:27:37.534503  803761 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-891255" does not appear in /home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 01:27:37.534840  803761 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-587265/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-891255" cluster setting kubeconfig missing "no-preload-891255" context setting]
	I0815 01:27:37.535367  803761 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/kubeconfig: {Name:mka65351b6674d2edd84b4cf38d527ec03739af8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:27:37.537063  803761 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:27:37.548715  803761 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0815 01:27:37.548796  803761 kubeadm.go:597] duration metric: took 27.194048ms to restartPrimaryControlPlane
	I0815 01:27:37.548820  803761 kubeadm.go:394] duration metric: took 104.197866ms to StartCluster
	I0815 01:27:37.548867  803761 settings.go:142] acquiring lock: {Name:mkf353d296e2684cbdd29a016c10a0eb45e9f213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:27:37.548977  803761 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 01:27:37.550010  803761 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/kubeconfig: {Name:mka65351b6674d2edd84b4cf38d527ec03739af8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:27:37.550303  803761 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0815 01:27:37.550814  803761 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:27:37.550891  803761 addons.go:69] Setting storage-provisioner=true in profile "no-preload-891255"
	I0815 01:27:37.550913  803761 addons.go:234] Setting addon storage-provisioner=true in "no-preload-891255"
	W0815 01:27:37.550919  803761 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:27:37.550942  803761 host.go:66] Checking if "no-preload-891255" exists ...
	I0815 01:27:37.551502  803761 cli_runner.go:164] Run: docker container inspect no-preload-891255 --format={{.State.Status}}
	I0815 01:27:37.551798  803761 config.go:182] Loaded profile config "no-preload-891255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 01:27:37.551955  803761 addons.go:69] Setting metrics-server=true in profile "no-preload-891255"
	I0815 01:27:37.552009  803761 addons.go:234] Setting addon metrics-server=true in "no-preload-891255"
	W0815 01:27:37.552063  803761 addons.go:243] addon metrics-server should already be in state true
	I0815 01:27:37.552105  803761 host.go:66] Checking if "no-preload-891255" exists ...
	I0815 01:27:37.552639  803761 cli_runner.go:164] Run: docker container inspect no-preload-891255 --format={{.State.Status}}
	I0815 01:27:37.555148  803761 out.go:177] * Verifying Kubernetes components...
	I0815 01:27:37.555398  803761 addons.go:69] Setting default-storageclass=true in profile "no-preload-891255"
	I0815 01:27:37.555431  803761 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-891255"
	I0815 01:27:37.556297  803761 cli_runner.go:164] Run: docker container inspect no-preload-891255 --format={{.State.Status}}
	I0815 01:27:37.556648  803761 addons.go:69] Setting dashboard=true in profile "no-preload-891255"
	I0815 01:27:37.557025  803761 addons.go:234] Setting addon dashboard=true in "no-preload-891255"
	W0815 01:27:37.558157  803761 addons.go:243] addon dashboard should already be in state true
	I0815 01:27:37.558249  803761 host.go:66] Checking if "no-preload-891255" exists ...
	I0815 01:27:37.560283  803761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:27:37.561161  803761 cli_runner.go:164] Run: docker container inspect no-preload-891255 --format={{.State.Status}}
	I0815 01:27:37.617832  803761 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:27:37.622897  803761 addons.go:234] Setting addon default-storageclass=true in "no-preload-891255"
	W0815 01:27:37.622920  803761 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:27:37.622944  803761 host.go:66] Checking if "no-preload-891255" exists ...
	I0815 01:27:37.623413  803761 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:27:37.623436  803761 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:27:37.623509  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:37.623985  803761 cli_runner.go:164] Run: docker container inspect no-preload-891255 --format={{.State.Status}}
	I0815 01:27:37.638249  803761 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0815 01:27:37.638364  803761 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:27:37.640551  803761 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0815 01:27:34.824704  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:37.324127  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:39.325365  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:37.640662  803761 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:27:37.640676  803761 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:27:37.640754  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:37.642155  803761 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0815 01:27:37.642183  803761 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0815 01:27:37.642249  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:37.698773  803761 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:27:37.698796  803761 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:27:37.698857  803761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891255
	I0815 01:27:37.717506  803761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/no-preload-891255/id_rsa Username:docker}
	I0815 01:27:37.723999  803761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/no-preload-891255/id_rsa Username:docker}
	I0815 01:27:37.733964  803761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/no-preload-891255/id_rsa Username:docker}
	I0815 01:27:37.747263  803761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/no-preload-891255/id_rsa Username:docker}
	I0815 01:27:37.799926  803761 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:27:37.870916  803761 node_ready.go:35] waiting up to 6m0s for node "no-preload-891255" to be "Ready" ...
	I0815 01:27:37.915707  803761 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0815 01:27:37.915730  803761 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0815 01:27:37.996627  803761 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0815 01:27:37.996659  803761 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0815 01:27:38.028878  803761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:27:38.055598  803761 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:27:38.055640  803761 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:27:38.071588  803761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:27:38.225178  803761 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0815 01:27:38.225258  803761 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0815 01:27:38.269345  803761 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:27:38.269376  803761 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:27:38.296602  803761 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0815 01:27:38.296629  803761 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0815 01:27:38.353701  803761 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:27:38.353738  803761 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:27:38.382420  803761 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0815 01:27:38.382458  803761 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0815 01:27:38.514518  803761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:27:38.560916  803761 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0815 01:27:38.560995  803761 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0815 01:27:38.688554  803761 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0815 01:27:38.688628  803761 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0815 01:27:38.734488  803761 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0815 01:27:38.734567  803761 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0815 01:27:38.840227  803761 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 01:27:38.840302  803761 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0815 01:27:38.896730  803761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 01:27:41.839092  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:44.325718  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:43.155439  803761 node_ready.go:49] node "no-preload-891255" has status "Ready":"True"
	I0815 01:27:43.155464  803761 node_ready.go:38] duration metric: took 5.284505523s for node "no-preload-891255" to be "Ready" ...
	I0815 01:27:43.155474  803761 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:27:43.236802  803761 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-q45lr" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:43.297601  803761 pod_ready.go:92] pod "coredns-6f6b679f8f-q45lr" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:43.297671  803761 pod_ready.go:81] duration metric: took 60.793612ms for pod "coredns-6f6b679f8f-q45lr" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:43.297701  803761 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-891255" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:43.341163  803761 pod_ready.go:92] pod "etcd-no-preload-891255" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:43.341239  803761 pod_ready.go:81] duration metric: took 43.509638ms for pod "etcd-no-preload-891255" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:43.341270  803761 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-891255" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:43.367071  803761 pod_ready.go:92] pod "kube-apiserver-no-preload-891255" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:43.367144  803761 pod_ready.go:81] duration metric: took 25.852612ms for pod "kube-apiserver-no-preload-891255" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:43.367176  803761 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-891255" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:43.400093  803761 pod_ready.go:92] pod "kube-controller-manager-no-preload-891255" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:43.400165  803761 pod_ready.go:81] duration metric: took 32.96864ms for pod "kube-controller-manager-no-preload-891255" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:43.400216  803761 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7k9tc" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:45.406853  803761 pod_ready.go:102] pod "kube-proxy-7k9tc" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:45.906165  803761 pod_ready.go:92] pod "kube-proxy-7k9tc" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:45.906245  803761 pod_ready.go:81] duration metric: took 2.506004597s for pod "kube-proxy-7k9tc" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:45.906277  803761 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-891255" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:46.133972  803761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.10505343s)
	I0815 01:27:46.134095  803761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.062468966s)
	W0815 01:27:46.134352  803761 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0815 01:27:46.134174  803761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.619577449s)
	I0815 01:27:46.134411  803761 addons.go:475] Verifying addon metrics-server=true in "no-preload-891255"
	I0815 01:27:46.134272  803761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.237510537s)
	I0815 01:27:46.134537  803761 retry.go:31] will retry after 160.382742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0815 01:27:46.136513  803761 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-891255 addons enable metrics-server
	
	I0815 01:27:46.295970  803761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:27:46.489160  803761 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0815 01:27:46.824159  798414 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:48.323803  798414 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:48.323828  798414 pod_ready.go:81] duration metric: took 1m9.006777258s for pod "kube-controller-manager-old-k8s-version-145466" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:48.323840  798414 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hdj25" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:48.328838  798414 pod_ready.go:92] pod "kube-proxy-hdj25" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:48.328866  798414 pod_ready.go:81] duration metric: took 4.98266ms for pod "kube-proxy-hdj25" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:48.328878  798414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-145466" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:46.490615  803761 addons.go:510] duration metric: took 8.939796419s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0815 01:27:47.913124  803761 pod_ready.go:102] pod "kube-scheduler-no-preload-891255" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:50.337069  798414 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:52.836311  798414 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:54.334906  798414 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-145466" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:54.334933  798414 pod_ready.go:81] duration metric: took 6.006014766s for pod "kube-scheduler-old-k8s-version-145466" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:54.334945  798414 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:50.413174  803761 pod_ready.go:102] pod "kube-scheduler-no-preload-891255" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:52.413472  803761 pod_ready.go:102] pod "kube-scheduler-no-preload-891255" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:54.914461  803761 pod_ready.go:102] pod "kube-scheduler-no-preload-891255" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:56.341453  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:58.841137  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:27:57.420989  803761 pod_ready.go:92] pod "kube-scheduler-no-preload-891255" in "kube-system" namespace has status "Ready":"True"
	I0815 01:27:57.421017  803761 pod_ready.go:81] duration metric: took 11.514711642s for pod "kube-scheduler-no-preload-891255" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:57.421029  803761 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace to be "Ready" ...
	I0815 01:27:59.426644  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:01.341468  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:03.342277  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:01.428229  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:03.927438  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:05.841177  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:07.841420  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:06.428013  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:08.927156  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:09.841515  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:11.841903  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:14.341251  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:10.927748  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:12.927949  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:16.341817  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:18.905274  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:15.428036  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:17.927452  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:21.340938  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:23.343888  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:20.427658  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:22.926849  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:24.927327  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:25.840886  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:27.841307  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:27.427742  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:29.928619  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:29.842382  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:31.845262  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:34.340679  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:32.426860  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:34.928177  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:36.340926  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:38.348235  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:37.426923  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:39.427676  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:40.841788  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:43.341955  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:41.927220  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:44.427318  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:45.343754  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:47.841262  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:46.427528  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:48.428585  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:50.341583  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:52.342360  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:50.927664  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:52.927962  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:54.928112  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:54.842055  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:57.341767  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:57.426776  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:59.427474  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:28:59.842110  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:02.341631  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:01.428036  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:03.927170  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:04.841905  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:07.341412  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:09.361881  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:05.927728  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:08.427194  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:11.840761  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:13.845821  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:10.926907  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:12.927545  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:14.927893  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:16.341158  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:18.341582  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:17.427649  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:19.428474  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:20.342140  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:22.842042  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:21.927110  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:24.427410  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:24.842165  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:27.341302  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:26.927017  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:28.927263  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:29.844818  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:32.340995  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:34.341661  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:30.927788  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:33.427114  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:36.841766  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:39.342526  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:35.427179  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:37.927113  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:41.841158  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:43.842484  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:40.427490  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:42.927907  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:44.928392  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:46.341306  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:48.840994  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:47.426868  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:49.927329  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.842218  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:53.341747  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:52.426796  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:54.426847  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:55.841257  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:58.341465  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:56.927611  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:59.427411  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:00.380117  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:02.840765  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:01.427446  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:03.427927  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.341086  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:07.341145  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.341333  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.927282  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:08.427203  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.342252  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:13.841939  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:10.427470  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:12.427509  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:14.427889  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:16.345255  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.843266  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:16.428131  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.927570  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:21.341071  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.341252  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:21.426786  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.427241  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:25.341574  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:27.842623  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:25.427318  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:27.928194  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:30.341285  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.342057  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:30.427166  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.926803  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:34.927877  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:34.841107  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:36.841164  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:38.841855  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:37.426777  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.427906  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:41.340950  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:43.341511  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:41.927037  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:43.927338  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:45.342593  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.840983  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:46.426893  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:48.927895  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:49.842312  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:52.341179  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:51.427096  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.427519  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:54.841042  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:56.841647  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:59.340948  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:55.927137  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:58.426574  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:01.840886  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:03.841711  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.427607  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.926729  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:06.341111  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:08.845348  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.428060  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:07.927610  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.340908  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:13.340956  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:10.428551  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:12.927001  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.927235  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.341002  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:17.342150  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:16.927528  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.927597  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:19.841230  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.841934  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:24.341985  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.428065  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.926563  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:26.841205  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.841630  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:25.927125  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.927322  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:29.927384  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:31.341383  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:33.841991  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.426491  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.427005  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.341927  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:38.840908  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.427436  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:38.926916  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:41.340767  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:43.341898  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:40.927564  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.927616  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.927766  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.342464  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:47.842026  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:47.428985  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:49.926419  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:50.341226  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:52.841594  798414 pod_ready.go:102] pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:54.341770  798414 pod_ready.go:81] duration metric: took 4m0.006810385s for pod "metrics-server-9975d5f86-qvcw4" in "kube-system" namespace to be "Ready" ...
	E0815 01:31:54.341793  798414 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 01:31:54.341801  798414 pod_ready.go:38] duration metric: took 5m26.123866447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:31:54.341815  798414 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:31:54.341844  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:54.341906  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:54.389104  798414 cri.go:89] found id: "5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645"
	I0815 01:31:54.389129  798414 cri.go:89] found id: "d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f"
	I0815 01:31:54.389135  798414 cri.go:89] found id: ""
	I0815 01:31:54.389142  798414 logs.go:276] 2 containers: [5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645 d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f]
	I0815 01:31:54.389202  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.392818  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.396470  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0815 01:31:54.396573  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:54.440719  798414 cri.go:89] found id: "1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43"
	I0815 01:31:54.440738  798414 cri.go:89] found id: "149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d"
	I0815 01:31:54.440744  798414 cri.go:89] found id: ""
	I0815 01:31:54.440751  798414 logs.go:276] 2 containers: [1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43 149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d]
	I0815 01:31:54.440852  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.444415  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.447805  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0815 01:31:54.447916  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:54.487638  798414 cri.go:89] found id: "08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9"
	I0815 01:31:54.487661  798414 cri.go:89] found id: "56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4"
	I0815 01:31:54.487666  798414 cri.go:89] found id: ""
	I0815 01:31:54.487673  798414 logs.go:276] 2 containers: [08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9 56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4]
	I0815 01:31:54.487731  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.491365  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.494774  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:54.494846  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:54.556835  798414 cri.go:89] found id: "ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8"
	I0815 01:31:54.556860  798414 cri.go:89] found id: "f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d"
	I0815 01:31:54.556865  798414 cri.go:89] found id: ""
	I0815 01:31:54.556873  798414 logs.go:276] 2 containers: [ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8 f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d]
	I0815 01:31:54.556935  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.560493  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.563824  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:54.563938  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:51.927026  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:54.427978  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:54.604088  798414 cri.go:89] found id: "65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418"
	I0815 01:31:54.604161  798414 cri.go:89] found id: "8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5"
	I0815 01:31:54.604173  798414 cri.go:89] found id: ""
	I0815 01:31:54.604181  798414 logs.go:276] 2 containers: [65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418 8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5]
	I0815 01:31:54.604282  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.607763  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.611024  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:54.611091  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:54.649541  798414 cri.go:89] found id: "30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd"
	I0815 01:31:54.649565  798414 cri.go:89] found id: "d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f"
	I0815 01:31:54.649570  798414 cri.go:89] found id: ""
	I0815 01:31:54.649577  798414 logs.go:276] 2 containers: [30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f]
	I0815 01:31:54.649635  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.653540  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.656990  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:54.657061  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:54.705048  798414 cri.go:89] found id: "6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b"
	I0815 01:31:54.705073  798414 cri.go:89] found id: "8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1"
	I0815 01:31:54.705078  798414 cri.go:89] found id: ""
	I0815 01:31:54.705086  798414 logs.go:276] 2 containers: [6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b 8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1]
	I0815 01:31:54.705142  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.708794  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.712050  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:54.712119  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:54.761867  798414 cri.go:89] found id: "666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6"
	I0815 01:31:54.761892  798414 cri.go:89] found id: ""
	I0815 01:31:54.761901  798414 logs.go:276] 1 containers: [666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6]
	I0815 01:31:54.761956  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.765515  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:31:54.765592  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:31:54.814259  798414 cri.go:89] found id: "fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3"
	I0815 01:31:54.814287  798414 cri.go:89] found id: "6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414"
	I0815 01:31:54.814292  798414 cri.go:89] found id: ""
	I0815 01:31:54.814299  798414 logs.go:276] 2 containers: [fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3 6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414]
	I0815 01:31:54.814357  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.818624  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:31:54.822284  798414 logs.go:123] Gathering logs for kube-scheduler [f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d] ...
	I0815 01:31:54.822315  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d"
	I0815 01:31:54.866675  798414 logs.go:123] Gathering logs for kindnet [6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b] ...
	I0815 01:31:54.866707  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b"
	I0815 01:31:54.927529  798414 logs.go:123] Gathering logs for etcd [1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43] ...
	I0815 01:31:54.927578  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43"
	I0815 01:31:54.977111  798414 logs.go:123] Gathering logs for coredns [08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9] ...
	I0815 01:31:54.977140  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9"
	I0815 01:31:55.029599  798414 logs.go:123] Gathering logs for kube-proxy [65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418] ...
	I0815 01:31:55.029630  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418"
	I0815 01:31:55.072848  798414 logs.go:123] Gathering logs for kube-controller-manager [30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd] ...
	I0815 01:31:55.072876  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd"
	I0815 01:31:55.141657  798414 logs.go:123] Gathering logs for kube-controller-manager [d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f] ...
	I0815 01:31:55.141692  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f"
	I0815 01:31:55.207613  798414 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:55.207649  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 01:31:55.272355  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.206496     664 reflector.go:138] object-"kube-system"/"coredns-token-hqq2w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-hqq2w" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.272573  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.206838     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.272786  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207185     664 reflector.go:138] object-"kube-system"/"kindnet-token-pflml": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-pflml" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.273001  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207498     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-rl52n": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-rl52n" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.273206  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207678     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.273517  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.264938     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-k6pcd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-k6pcd" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.273725  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.114283     664 reflector.go:138] object-"default"/"default-token-j4wgn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-j4wgn" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.276976  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:29 old-k8s-version-145466 kubelet[664]: E0815 01:26:29.390721     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:31:55.278325  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:29 old-k8s-version-145466 kubelet[664]: E0815 01:26:29.415771     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.281690  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:42 old-k8s-version-145466 kubelet[664]: E0815 01:26:42.146871     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:31:55.282438  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:49 old-k8s-version-145466 kubelet[664]: E0815 01:26:49.166497     664 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-fksp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-fksp6" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:31:55.283970  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:57 old-k8s-version-145466 kubelet[664]: E0815 01:26:57.141841     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.284430  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:00 old-k8s-version-145466 kubelet[664]: E0815 01:27:00.751687     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.284885  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:01 old-k8s-version-145466 kubelet[664]: E0815 01:27:01.756479     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.285325  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:02 old-k8s-version-145466 kubelet[664]: E0815 01:27:02.761739     664 pod_workers.go:191] Error syncing pod d7e84c38-c90e-427c-bfc5-45adf788d6fe ("storage-provisioner_kube-system(d7e84c38-c90e-427c-bfc5-45adf788d6fe)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d7e84c38-c90e-427c-bfc5-45adf788d6fe)"
	W0815 01:31:55.285983  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:08 old-k8s-version-145466 kubelet[664]: E0815 01:27:08.528394     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.288438  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:11 old-k8s-version-145466 kubelet[664]: E0815 01:27:11.140512     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:31:55.289156  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:20 old-k8s-version-145466 kubelet[664]: E0815 01:27:20.809846     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.289339  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:24 old-k8s-version-145466 kubelet[664]: E0815 01:27:24.133897     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.289665  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:28 old-k8s-version-145466 kubelet[664]: E0815 01:27:28.528406     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.289850  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:38 old-k8s-version-145466 kubelet[664]: E0815 01:27:38.186907     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.290435  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:42 old-k8s-version-145466 kubelet[664]: E0815 01:27:42.877822     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.290766  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:48 old-k8s-version-145466 kubelet[664]: E0815 01:27:48.529042     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.293220  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:53 old-k8s-version-145466 kubelet[664]: E0815 01:27:53.141943     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:31:55.293549  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:02 old-k8s-version-145466 kubelet[664]: E0815 01:28:02.132198     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.293733  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:04 old-k8s-version-145466 kubelet[664]: E0815 01:28:04.133346     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.293928  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:16 old-k8s-version-145466 kubelet[664]: E0815 01:28:16.144812     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.294252  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:17 old-k8s-version-145466 kubelet[664]: E0815 01:28:17.132160     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.294839  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:29 old-k8s-version-145466 kubelet[664]: E0815 01:28:29.028736     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.295025  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:31 old-k8s-version-145466 kubelet[664]: E0815 01:28:31.132458     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.295352  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:38 old-k8s-version-145466 kubelet[664]: E0815 01:28:38.528400     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.295538  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:44 old-k8s-version-145466 kubelet[664]: E0815 01:28:44.132479     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.295872  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:51 old-k8s-version-145466 kubelet[664]: E0815 01:28:51.133407     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.296062  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:56 old-k8s-version-145466 kubelet[664]: E0815 01:28:56.135291     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.296387  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:04 old-k8s-version-145466 kubelet[664]: E0815 01:29:04.132344     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.296569  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:10 old-k8s-version-145466 kubelet[664]: E0815 01:29:10.132536     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.296893  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:17 old-k8s-version-145466 kubelet[664]: E0815 01:29:17.132162     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.299321  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:23 old-k8s-version-145466 kubelet[664]: E0815 01:29:23.139752     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:31:55.299673  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:28 old-k8s-version-145466 kubelet[664]: E0815 01:29:28.139064     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.299863  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:34 old-k8s-version-145466 kubelet[664]: E0815 01:29:34.132664     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.300190  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:40 old-k8s-version-145466 kubelet[664]: E0815 01:29:40.133096     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.300379  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:49 old-k8s-version-145466 kubelet[664]: E0815 01:29:49.133068     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.300968  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:51 old-k8s-version-145466 kubelet[664]: E0815 01:29:51.281771     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.301293  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:58 old-k8s-version-145466 kubelet[664]: E0815 01:29:58.528848     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.301485  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:00 old-k8s-version-145466 kubelet[664]: E0815 01:30:00.160446     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.301814  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:10 old-k8s-version-145466 kubelet[664]: E0815 01:30:10.135730     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.301998  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:11 old-k8s-version-145466 kubelet[664]: E0815 01:30:11.132453     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.302337  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:22 old-k8s-version-145466 kubelet[664]: E0815 01:30:22.132670     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.302530  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:26 old-k8s-version-145466 kubelet[664]: E0815 01:30:26.135386     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.302859  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:33 old-k8s-version-145466 kubelet[664]: E0815 01:30:33.132520     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.303041  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:41 old-k8s-version-145466 kubelet[664]: E0815 01:30:41.132604     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.303369  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:45 old-k8s-version-145466 kubelet[664]: E0815 01:30:45.132399     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.303551  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:54 old-k8s-version-145466 kubelet[664]: E0815 01:30:54.132932     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.303895  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:56 old-k8s-version-145466 kubelet[664]: E0815 01:30:56.132938     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.304080  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:06 old-k8s-version-145466 kubelet[664]: E0815 01:31:06.132954     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.304404  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:11 old-k8s-version-145466 kubelet[664]: E0815 01:31:11.132193     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.304589  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:20 old-k8s-version-145466 kubelet[664]: E0815 01:31:20.132734     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.304913  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:23 old-k8s-version-145466 kubelet[664]: E0815 01:31:23.132188     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.305096  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:31 old-k8s-version-145466 kubelet[664]: E0815 01:31:31.132519     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.305420  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:38 old-k8s-version-145466 kubelet[664]: E0815 01:31:38.133982     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.305603  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:42 old-k8s-version-145466 kubelet[664]: E0815 01:31:42.132790     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:55.305928  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.132720     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:55.306109  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.133955     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0815 01:31:55.306119  798414 logs.go:123] Gathering logs for kube-apiserver [5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645] ...
	I0815 01:31:55.306135  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645"
	I0815 01:31:55.368137  798414 logs.go:123] Gathering logs for kindnet [8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1] ...
	I0815 01:31:55.368175  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1"
	I0815 01:31:55.418716  798414 logs.go:123] Gathering logs for kubernetes-dashboard [666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6] ...
	I0815 01:31:55.418749  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6"
	I0815 01:31:55.463235  798414 logs.go:123] Gathering logs for storage-provisioner [fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3] ...
	I0815 01:31:55.463266  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3"
	I0815 01:31:55.524586  798414 logs.go:123] Gathering logs for container status ...
	I0815 01:31:55.524614  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:55.586139  798414 logs.go:123] Gathering logs for etcd [149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d] ...
	I0815 01:31:55.586167  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d"
	I0815 01:31:55.628448  798414 logs.go:123] Gathering logs for kube-scheduler [ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8] ...
	I0815 01:31:55.628476  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8"
	I0815 01:31:55.667804  798414 logs.go:123] Gathering logs for kube-apiserver [d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f] ...
	I0815 01:31:55.667828  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f"
	I0815 01:31:55.730471  798414 logs.go:123] Gathering logs for coredns [56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4] ...
	I0815 01:31:55.730513  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4"
	I0815 01:31:55.779800  798414 logs.go:123] Gathering logs for kube-proxy [8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5] ...
	I0815 01:31:55.779833  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5"
	I0815 01:31:55.823656  798414 logs.go:123] Gathering logs for storage-provisioner [6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414] ...
	I0815 01:31:55.823733  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414"
	I0815 01:31:55.861500  798414 logs.go:123] Gathering logs for containerd ...
	I0815 01:31:55.861528  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0815 01:31:55.921791  798414 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:55.921824  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:55.943979  798414 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:55.944021  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:31:56.104386  798414 out.go:304] Setting ErrFile to fd 2...
	I0815 01:31:56.104411  798414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 01:31:56.104602  798414 out.go:239] X Problems detected in kubelet:
	W0815 01:31:56.104620  798414 out.go:239]   Aug 15 01:31:31 old-k8s-version-145466 kubelet[664]: E0815 01:31:31.132519     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:56.104640  798414 out.go:239]   Aug 15 01:31:38 old-k8s-version-145466 kubelet[664]: E0815 01:31:38.133982     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:56.104655  798414 out.go:239]   Aug 15 01:31:42 old-k8s-version-145466 kubelet[664]: E0815 01:31:42.132790     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:31:56.104675  798414 out.go:239]   Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.132720     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:31:56.104690  798414 out.go:239]   Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.133955     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0815 01:31:56.104697  798414 out.go:304] Setting ErrFile to fd 2...
	I0815 01:31:56.104709  798414 out.go:338] TERM=,COLORTERM=, which probably does not support color
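The "Gathering logs for ..." entries above show minikube shelling out to crictl, journalctl, and dmesg on the node for each component before printing its "Problems detected in kubelet" summary. As a rough, manual sketch (not part of the test itself), the same evidence can be pulled from the node by hand; the profile name old-k8s-version-145466 is taken from the log, and the container ID placeholder would come from the crictl ps output:

    # Open a shell on the node for this profile (assumes the cluster is still running).
    minikube ssh -p old-k8s-version-145466

    # List containers for one component, then fetch its last 400 log lines,
    # mirroring the commands minikube runs above.
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl logs --tail 400 <container-id>

    # Kubelet and containerd logs come from journald; dmesg from the kernel ring buffer.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400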
	I0815 01:31:56.927528  803761 pod_ready.go:102] pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.426424  803761 pod_ready.go:81] duration metric: took 4m0.005381694s for pod "metrics-server-6867b74b74-nsksn" in "kube-system" namespace to be "Ready" ...
	E0815 01:31:57.426450  803761 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 01:31:57.426466  803761 pod_ready.go:38] duration metric: took 4m14.270979766s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
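The readiness wait above gives up after 4m0s on metrics-server-6867b74b74-nsksn, records the context-deadline error, and moves on to the apiserver process check. A stand-alone equivalent of that wait (pod name taken from the log; the --context value is an assumption based on the node name no-preload-891255 seen in the kubelet entries below):

    # Wait up to 4 minutes for the pod to report Ready, matching the timeout in the log.
    kubectl --context no-preload-891255 -n kube-system \
      wait --for=condition=Ready pod/metrics-server-6867b74b74-nsksn --timeout=4m
    # Exits non-zero with a "timed out waiting for the condition" error if the pod never becomes Ready.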
	I0815 01:31:57.426480  803761 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:31:57.426508  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:57.426580  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:57.462402  803761 cri.go:89] found id: "46c3cc70d2d5a312e15d7057df64d22a49f37589065ccc894bc8031b4a798951"
	I0815 01:31:57.462429  803761 cri.go:89] found id: "b690b96148f2277712ec84a51bec7a82401a426f009a95180cc964bb26bea9fb"
	I0815 01:31:57.462435  803761 cri.go:89] found id: ""
	I0815 01:31:57.462443  803761 logs.go:276] 2 containers: [46c3cc70d2d5a312e15d7057df64d22a49f37589065ccc894bc8031b4a798951 b690b96148f2277712ec84a51bec7a82401a426f009a95180cc964bb26bea9fb]
	I0815 01:31:57.462534  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.466167  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.469589  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0815 01:31:57.469662  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:57.522815  803761 cri.go:89] found id: "af8ae98559be9b10288ede748b6487069e6ae8e0717a8ea5cae3f97dae2473a3"
	I0815 01:31:57.522840  803761 cri.go:89] found id: "adc2315cf56607b40babf5046072a405155130dd7cd2b6c3e46f88da9eaa92f7"
	I0815 01:31:57.522845  803761 cri.go:89] found id: ""
	I0815 01:31:57.522853  803761 logs.go:276] 2 containers: [af8ae98559be9b10288ede748b6487069e6ae8e0717a8ea5cae3f97dae2473a3 adc2315cf56607b40babf5046072a405155130dd7cd2b6c3e46f88da9eaa92f7]
	I0815 01:31:57.522944  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.526918  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.530681  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0815 01:31:57.530752  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:57.570743  803761 cri.go:89] found id: "6c08c4834233480bedf28f39119c36ba7f9f59e52e6cd70fd88af53fe3b374af"
	I0815 01:31:57.570769  803761 cri.go:89] found id: "ba8534f9d63589367129cbff9b8d4f2498b7f86093c708dd66e2ccf978b436f1"
	I0815 01:31:57.570774  803761 cri.go:89] found id: ""
	I0815 01:31:57.570782  803761 logs.go:276] 2 containers: [6c08c4834233480bedf28f39119c36ba7f9f59e52e6cd70fd88af53fe3b374af ba8534f9d63589367129cbff9b8d4f2498b7f86093c708dd66e2ccf978b436f1]
	I0815 01:31:57.570871  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.574486  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.577683  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:57.577780  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:57.617135  803761 cri.go:89] found id: "914fef8b03c5cf0aba5b06b0736a9992a3324577daf003ec30aa0830552ddd54"
	I0815 01:31:57.617157  803761 cri.go:89] found id: "d9f6521d16507f27bd6386ff17f8b963c257f216da7f467c9210fb9178524678"
	I0815 01:31:57.617162  803761 cri.go:89] found id: ""
	I0815 01:31:57.617169  803761 logs.go:276] 2 containers: [914fef8b03c5cf0aba5b06b0736a9992a3324577daf003ec30aa0830552ddd54 d9f6521d16507f27bd6386ff17f8b963c257f216da7f467c9210fb9178524678]
	I0815 01:31:57.617253  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.620788  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.624602  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:57.624717  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:57.665191  803761 cri.go:89] found id: "f5b1c582f2613d5f66426be71311ebe74f79284ef08901e9fc908368b140053d"
	I0815 01:31:57.665260  803761 cri.go:89] found id: "f3022d73668343c2effb73b4ac408bd5f9d53392de4778783676da2189835e39"
	I0815 01:31:57.665281  803761 cri.go:89] found id: ""
	I0815 01:31:57.665304  803761 logs.go:276] 2 containers: [f5b1c582f2613d5f66426be71311ebe74f79284ef08901e9fc908368b140053d f3022d73668343c2effb73b4ac408bd5f9d53392de4778783676da2189835e39]
	I0815 01:31:57.665374  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.669052  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.672595  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:57.672699  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:57.711112  803761 cri.go:89] found id: "9612b5845ae0c1e1f75da16acadeb15251654526c47e57438d92794112c4cff3"
	I0815 01:31:57.711149  803761 cri.go:89] found id: "2b053d2cfe8aa99ea78837e3b3ce749a8c262ba3833f5bc139cbaa2c7657cb22"
	I0815 01:31:57.711155  803761 cri.go:89] found id: ""
	I0815 01:31:57.711163  803761 logs.go:276] 2 containers: [9612b5845ae0c1e1f75da16acadeb15251654526c47e57438d92794112c4cff3 2b053d2cfe8aa99ea78837e3b3ce749a8c262ba3833f5bc139cbaa2c7657cb22]
	I0815 01:31:57.711241  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.714858  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.718200  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:57.718324  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:57.754894  803761 cri.go:89] found id: "01f1d14032652bdf25ee75b1524737966f86b4c47f7b6e32348b4b50e444e7e0"
	I0815 01:31:57.754937  803761 cri.go:89] found id: "5a854e83e9a5556e1a2a9298080f95e0ec2337600228c065637e73d90913c7cd"
	I0815 01:31:57.754943  803761 cri.go:89] found id: ""
	I0815 01:31:57.754951  803761 logs.go:276] 2 containers: [01f1d14032652bdf25ee75b1524737966f86b4c47f7b6e32348b4b50e444e7e0 5a854e83e9a5556e1a2a9298080f95e0ec2337600228c065637e73d90913c7cd]
	I0815 01:31:57.755016  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.759055  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.762518  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:57.762604  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:57.801833  803761 cri.go:89] found id: "10c17d67b1c99bc8c8761398268c9dce700200fe24f2d6a147a048129527ab84"
	I0815 01:31:57.801858  803761 cri.go:89] found id: ""
	I0815 01:31:57.801867  803761 logs.go:276] 1 containers: [10c17d67b1c99bc8c8761398268c9dce700200fe24f2d6a147a048129527ab84]
	I0815 01:31:57.801922  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.805384  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:31:57.805455  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:31:57.845472  803761 cri.go:89] found id: "a413edeaf5acb43b535f0f12013ec424c1692ef01fdf855f50845dd0e017228b"
	I0815 01:31:57.845494  803761 cri.go:89] found id: "c88884152535dacfc5c62be43dc4ce7ff717b81e2d387dc5c6383a1ba266b499"
	I0815 01:31:57.845500  803761 cri.go:89] found id: ""
	I0815 01:31:57.845507  803761 logs.go:276] 2 containers: [a413edeaf5acb43b535f0f12013ec424c1692ef01fdf855f50845dd0e017228b c88884152535dacfc5c62be43dc4ce7ff717b81e2d387dc5c6383a1ba266b499]
	I0815 01:31:57.845567  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.849242  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:31:57.852792  803761 logs.go:123] Gathering logs for kube-controller-manager [2b053d2cfe8aa99ea78837e3b3ce749a8c262ba3833f5bc139cbaa2c7657cb22] ...
	I0815 01:31:57.852818  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b053d2cfe8aa99ea78837e3b3ce749a8c262ba3833f5bc139cbaa2c7657cb22"
	I0815 01:31:57.914973  803761 logs.go:123] Gathering logs for kindnet [5a854e83e9a5556e1a2a9298080f95e0ec2337600228c065637e73d90913c7cd] ...
	I0815 01:31:57.915028  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a854e83e9a5556e1a2a9298080f95e0ec2337600228c065637e73d90913c7cd"
	I0815 01:31:57.983460  803761 logs.go:123] Gathering logs for storage-provisioner [c88884152535dacfc5c62be43dc4ce7ff717b81e2d387dc5c6383a1ba266b499] ...
	I0815 01:31:57.983552  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c88884152535dacfc5c62be43dc4ce7ff717b81e2d387dc5c6383a1ba266b499"
	I0815 01:31:58.037566  803761 logs.go:123] Gathering logs for container status ...
	I0815 01:31:58.037602  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:58.092420  803761 logs.go:123] Gathering logs for kube-apiserver [b690b96148f2277712ec84a51bec7a82401a426f009a95180cc964bb26bea9fb] ...
	I0815 01:31:58.092450  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b690b96148f2277712ec84a51bec7a82401a426f009a95180cc964bb26bea9fb"
	I0815 01:31:58.158190  803761 logs.go:123] Gathering logs for etcd [af8ae98559be9b10288ede748b6487069e6ae8e0717a8ea5cae3f97dae2473a3] ...
	I0815 01:31:58.158226  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af8ae98559be9b10288ede748b6487069e6ae8e0717a8ea5cae3f97dae2473a3"
	I0815 01:31:58.210070  803761 logs.go:123] Gathering logs for kube-scheduler [d9f6521d16507f27bd6386ff17f8b963c257f216da7f467c9210fb9178524678] ...
	I0815 01:31:58.210101  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9f6521d16507f27bd6386ff17f8b963c257f216da7f467c9210fb9178524678"
	I0815 01:31:58.252782  803761 logs.go:123] Gathering logs for kube-controller-manager [9612b5845ae0c1e1f75da16acadeb15251654526c47e57438d92794112c4cff3] ...
	I0815 01:31:58.252811  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9612b5845ae0c1e1f75da16acadeb15251654526c47e57438d92794112c4cff3"
	I0815 01:31:58.329436  803761 logs.go:123] Gathering logs for kindnet [01f1d14032652bdf25ee75b1524737966f86b4c47f7b6e32348b4b50e444e7e0] ...
	I0815 01:31:58.329472  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01f1d14032652bdf25ee75b1524737966f86b4c47f7b6e32348b4b50e444e7e0"
	I0815 01:31:58.419135  803761 logs.go:123] Gathering logs for storage-provisioner [a413edeaf5acb43b535f0f12013ec424c1692ef01fdf855f50845dd0e017228b] ...
	I0815 01:31:58.419170  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a413edeaf5acb43b535f0f12013ec424c1692ef01fdf855f50845dd0e017228b"
	I0815 01:31:58.463305  803761 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:58.463348  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 01:31:58.512928  803761 logs.go:138] Found kubelet problem: Aug 15 01:27:46 no-preload-891255 kubelet[655]: W0815 01:27:46.751836     655 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-891255" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-891255' and this object
	W0815 01:31:58.513184  803761 logs.go:138] Found kubelet problem: Aug 15 01:27:46 no-preload-891255 kubelet[655]: E0815 01:27:46.751954     655 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-891255\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-891255' and this object" logger="UnhandledError"
	I0815 01:31:58.547581  803761 logs.go:123] Gathering logs for etcd [adc2315cf56607b40babf5046072a405155130dd7cd2b6c3e46f88da9eaa92f7] ...
	I0815 01:31:58.547617  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adc2315cf56607b40babf5046072a405155130dd7cd2b6c3e46f88da9eaa92f7"
	I0815 01:31:58.597280  803761 logs.go:123] Gathering logs for coredns [6c08c4834233480bedf28f39119c36ba7f9f59e52e6cd70fd88af53fe3b374af] ...
	I0815 01:31:58.597310  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c08c4834233480bedf28f39119c36ba7f9f59e52e6cd70fd88af53fe3b374af"
	I0815 01:31:58.642388  803761 logs.go:123] Gathering logs for kube-scheduler [914fef8b03c5cf0aba5b06b0736a9992a3324577daf003ec30aa0830552ddd54] ...
	I0815 01:31:58.642428  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 914fef8b03c5cf0aba5b06b0736a9992a3324577daf003ec30aa0830552ddd54"
	I0815 01:31:58.682792  803761 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:58.682817  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:31:58.835519  803761 logs.go:123] Gathering logs for coredns [ba8534f9d63589367129cbff9b8d4f2498b7f86093c708dd66e2ccf978b436f1] ...
	I0815 01:31:58.835554  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8534f9d63589367129cbff9b8d4f2498b7f86093c708dd66e2ccf978b436f1"
	I0815 01:31:58.874278  803761 logs.go:123] Gathering logs for kubernetes-dashboard [10c17d67b1c99bc8c8761398268c9dce700200fe24f2d6a147a048129527ab84] ...
	I0815 01:31:58.874305  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10c17d67b1c99bc8c8761398268c9dce700200fe24f2d6a147a048129527ab84"
	I0815 01:31:58.920418  803761 logs.go:123] Gathering logs for containerd ...
	I0815 01:31:58.920447  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0815 01:31:58.991319  803761 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:58.991358  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:59.008024  803761 logs.go:123] Gathering logs for kube-apiserver [46c3cc70d2d5a312e15d7057df64d22a49f37589065ccc894bc8031b4a798951] ...
	I0815 01:31:59.008058  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46c3cc70d2d5a312e15d7057df64d22a49f37589065ccc894bc8031b4a798951"
	I0815 01:31:59.062243  803761 logs.go:123] Gathering logs for kube-proxy [f5b1c582f2613d5f66426be71311ebe74f79284ef08901e9fc908368b140053d] ...
	I0815 01:31:59.062276  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b1c582f2613d5f66426be71311ebe74f79284ef08901e9fc908368b140053d"
	I0815 01:31:59.105325  803761 logs.go:123] Gathering logs for kube-proxy [f3022d73668343c2effb73b4ac408bd5f9d53392de4778783676da2189835e39] ...
	I0815 01:31:59.105352  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3022d73668343c2effb73b4ac408bd5f9d53392de4778783676da2189835e39"
	I0815 01:31:59.161607  803761 out.go:304] Setting ErrFile to fd 2...
	I0815 01:31:59.161633  803761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 01:31:59.161680  803761 out.go:239] X Problems detected in kubelet:
	W0815 01:31:59.161690  803761 out.go:239]   Aug 15 01:27:46 no-preload-891255 kubelet[655]: W0815 01:27:46.751836     655 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-891255" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-891255' and this object
	W0815 01:31:59.161700  803761 out.go:239]   Aug 15 01:27:46 no-preload-891255 kubelet[655]: E0815 01:27:46.751954     655 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-891255\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-891255' and this object" logger="UnhandledError"
	I0815 01:31:59.161708  803761 out.go:304] Setting ErrFile to fd 2...
	I0815 01:31:59.161724  803761 out.go:338] TERM=,COLORTERM=, which probably does not support color
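Both profiles gather "describe nodes" with the kubectl binary minikube stages on the node (v1.20.0 for the old-k8s-version profile, v1.31.0 here), reading /var/lib/minikube/kubeconfig directly. A hedged sketch of the same check, either from the host via minikube's kubectl pass-through or on the node exactly as the log shows:

    # From the host, through minikube's bundled kubectl for this profile.
    minikube kubectl -p no-preload-891255 -- describe nodes

    # Or on the node itself, as in the Run line above.
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig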
	I0815 01:32:06.105579  798414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:06.118752  798414 api_server.go:72] duration metric: took 5m57.813041637s to wait for apiserver process to appear ...
	I0815 01:32:06.118774  798414 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:32:06.118810  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:06.118865  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:06.165617  798414 cri.go:89] found id: "5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645"
	I0815 01:32:06.165640  798414 cri.go:89] found id: "d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f"
	I0815 01:32:06.165646  798414 cri.go:89] found id: ""
	I0815 01:32:06.165653  798414 logs.go:276] 2 containers: [5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645 d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f]
	I0815 01:32:06.165708  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.169565  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.173032  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0815 01:32:06.173102  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:06.214710  798414 cri.go:89] found id: "1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43"
	I0815 01:32:06.214731  798414 cri.go:89] found id: "149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d"
	I0815 01:32:06.214735  798414 cri.go:89] found id: ""
	I0815 01:32:06.214743  798414 logs.go:276] 2 containers: [1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43 149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d]
	I0815 01:32:06.214801  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.219522  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.223511  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0815 01:32:06.223588  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:06.275179  798414 cri.go:89] found id: "08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9"
	I0815 01:32:06.275200  798414 cri.go:89] found id: "56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4"
	I0815 01:32:06.275205  798414 cri.go:89] found id: ""
	I0815 01:32:06.275212  798414 logs.go:276] 2 containers: [08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9 56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4]
	I0815 01:32:06.275271  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.279316  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.283547  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:06.283635  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:06.325027  798414 cri.go:89] found id: "ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8"
	I0815 01:32:06.325047  798414 cri.go:89] found id: "f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d"
	I0815 01:32:06.325051  798414 cri.go:89] found id: ""
	I0815 01:32:06.325059  798414 logs.go:276] 2 containers: [ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8 f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d]
	I0815 01:32:06.325114  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.328921  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.332566  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:06.332658  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:06.374132  798414 cri.go:89] found id: "65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418"
	I0815 01:32:06.374165  798414 cri.go:89] found id: "8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5"
	I0815 01:32:06.374173  798414 cri.go:89] found id: ""
	I0815 01:32:06.374184  798414 logs.go:276] 2 containers: [65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418 8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5]
	I0815 01:32:06.374246  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.379778  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.383769  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:06.383887  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:06.424513  798414 cri.go:89] found id: "30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd"
	I0815 01:32:06.424585  798414 cri.go:89] found id: "d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f"
	I0815 01:32:06.424598  798414 cri.go:89] found id: ""
	I0815 01:32:06.424607  798414 logs.go:276] 2 containers: [30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f]
	I0815 01:32:06.424671  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.428875  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.433119  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:06.433275  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:06.478733  798414 cri.go:89] found id: "6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b"
	I0815 01:32:06.478760  798414 cri.go:89] found id: "8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1"
	I0815 01:32:06.478767  798414 cri.go:89] found id: ""
	I0815 01:32:06.478775  798414 logs.go:276] 2 containers: [6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b 8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1]
	I0815 01:32:06.478845  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.482927  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.486698  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:06.486788  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:06.536723  798414 cri.go:89] found id: "666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6"
	I0815 01:32:06.536748  798414 cri.go:89] found id: ""
	I0815 01:32:06.536757  798414 logs.go:276] 1 containers: [666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6]
	I0815 01:32:06.536832  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.540620  798414 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:32:06.540726  798414 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:32:06.577792  798414 cri.go:89] found id: "fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3"
	I0815 01:32:06.577814  798414 cri.go:89] found id: "6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414"
	I0815 01:32:06.577820  798414 cri.go:89] found id: ""
	I0815 01:32:06.577827  798414 logs.go:276] 2 containers: [fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3 6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414]
	I0815 01:32:06.577881  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.581525  798414 ssh_runner.go:195] Run: which crictl
	I0815 01:32:06.585154  798414 logs.go:123] Gathering logs for kube-proxy [65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418] ...
	I0815 01:32:06.585180  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418"
	I0815 01:32:06.629579  798414 logs.go:123] Gathering logs for kubernetes-dashboard [666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6] ...
	I0815 01:32:06.629604  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6"
	I0815 01:32:06.671944  798414 logs.go:123] Gathering logs for storage-provisioner [fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3] ...
	I0815 01:32:06.671982  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3"
	I0815 01:32:06.713621  798414 logs.go:123] Gathering logs for container status ...
	I0815 01:32:06.713664  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:06.771192  798414 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:06.771223  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:32:06.916807  798414 logs.go:123] Gathering logs for coredns [56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4] ...
	I0815 01:32:06.916881  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4"
	I0815 01:32:06.967152  798414 logs.go:123] Gathering logs for kube-scheduler [ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8] ...
	I0815 01:32:06.967233  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8"
	I0815 01:32:07.009506  798414 logs.go:123] Gathering logs for kube-proxy [8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5] ...
	I0815 01:32:07.009537  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5"
	I0815 01:32:07.049321  798414 logs.go:123] Gathering logs for containerd ...
	I0815 01:32:07.049360  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0815 01:32:07.113454  798414 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:07.113494  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 01:32:07.177310  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.206496     664 reflector.go:138] object-"kube-system"/"coredns-token-hqq2w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-hqq2w" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.177533  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.206838     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.177748  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207185     664 reflector.go:138] object-"kube-system"/"kindnet-token-pflml": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-pflml" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.177967  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207498     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-rl52n": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-rl52n" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.178175  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.207678     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.178488  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.264938     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-k6pcd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-k6pcd" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.178700  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:28 old-k8s-version-145466 kubelet[664]: E0815 01:26:28.114283     664 reflector.go:138] object-"default"/"default-token-j4wgn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-j4wgn" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.181924  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:29 old-k8s-version-145466 kubelet[664]: E0815 01:26:29.390721     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.183276  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:29 old-k8s-version-145466 kubelet[664]: E0815 01:26:29.415771     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.186634  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:42 old-k8s-version-145466 kubelet[664]: E0815 01:26:42.146871     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.187378  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:49 old-k8s-version-145466 kubelet[664]: E0815 01:26:49.166497     664 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-fksp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-fksp6" is forbidden: User "system:node:old-k8s-version-145466" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-145466' and this object
	W0815 01:32:07.188913  798414 logs.go:138] Found kubelet problem: Aug 15 01:26:57 old-k8s-version-145466 kubelet[664]: E0815 01:26:57.141841     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.189372  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:00 old-k8s-version-145466 kubelet[664]: E0815 01:27:00.751687     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.189828  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:01 old-k8s-version-145466 kubelet[664]: E0815 01:27:01.756479     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.190265  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:02 old-k8s-version-145466 kubelet[664]: E0815 01:27:02.761739     664 pod_workers.go:191] Error syncing pod d7e84c38-c90e-427c-bfc5-45adf788d6fe ("storage-provisioner_kube-system(d7e84c38-c90e-427c-bfc5-45adf788d6fe)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d7e84c38-c90e-427c-bfc5-45adf788d6fe)"
	W0815 01:32:07.190931  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:08 old-k8s-version-145466 kubelet[664]: E0815 01:27:08.528394     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.193356  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:11 old-k8s-version-145466 kubelet[664]: E0815 01:27:11.140512     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.194070  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:20 old-k8s-version-145466 kubelet[664]: E0815 01:27:20.809846     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.194257  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:24 old-k8s-version-145466 kubelet[664]: E0815 01:27:24.133897     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.194585  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:28 old-k8s-version-145466 kubelet[664]: E0815 01:27:28.528406     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.194768  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:38 old-k8s-version-145466 kubelet[664]: E0815 01:27:38.186907     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.195356  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:42 old-k8s-version-145466 kubelet[664]: E0815 01:27:42.877822     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.195681  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:48 old-k8s-version-145466 kubelet[664]: E0815 01:27:48.529042     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.198128  798414 logs.go:138] Found kubelet problem: Aug 15 01:27:53 old-k8s-version-145466 kubelet[664]: E0815 01:27:53.141943     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.198456  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:02 old-k8s-version-145466 kubelet[664]: E0815 01:28:02.132198     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.198668  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:04 old-k8s-version-145466 kubelet[664]: E0815 01:28:04.133346     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.198852  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:16 old-k8s-version-145466 kubelet[664]: E0815 01:28:16.144812     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.199177  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:17 old-k8s-version-145466 kubelet[664]: E0815 01:28:17.132160     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.199766  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:29 old-k8s-version-145466 kubelet[664]: E0815 01:28:29.028736     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.199975  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:31 old-k8s-version-145466 kubelet[664]: E0815 01:28:31.132458     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.200308  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:38 old-k8s-version-145466 kubelet[664]: E0815 01:28:38.528400     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.200493  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:44 old-k8s-version-145466 kubelet[664]: E0815 01:28:44.132479     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.200817  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:51 old-k8s-version-145466 kubelet[664]: E0815 01:28:51.133407     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.201005  798414 logs.go:138] Found kubelet problem: Aug 15 01:28:56 old-k8s-version-145466 kubelet[664]: E0815 01:28:56.135291     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.201329  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:04 old-k8s-version-145466 kubelet[664]: E0815 01:29:04.132344     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.201512  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:10 old-k8s-version-145466 kubelet[664]: E0815 01:29:10.132536     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.201837  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:17 old-k8s-version-145466 kubelet[664]: E0815 01:29:17.132162     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.204300  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:23 old-k8s-version-145466 kubelet[664]: E0815 01:29:23.139752     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.204628  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:28 old-k8s-version-145466 kubelet[664]: E0815 01:29:28.139064     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.204812  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:34 old-k8s-version-145466 kubelet[664]: E0815 01:29:34.132664     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.205154  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:40 old-k8s-version-145466 kubelet[664]: E0815 01:29:40.133096     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.205337  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:49 old-k8s-version-145466 kubelet[664]: E0815 01:29:49.133068     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.205925  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:51 old-k8s-version-145466 kubelet[664]: E0815 01:29:51.281771     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.206255  798414 logs.go:138] Found kubelet problem: Aug 15 01:29:58 old-k8s-version-145466 kubelet[664]: E0815 01:29:58.528848     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.206438  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:00 old-k8s-version-145466 kubelet[664]: E0815 01:30:00.160446     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.206767  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:10 old-k8s-version-145466 kubelet[664]: E0815 01:30:10.135730     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.206949  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:11 old-k8s-version-145466 kubelet[664]: E0815 01:30:11.132453     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.207272  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:22 old-k8s-version-145466 kubelet[664]: E0815 01:30:22.132670     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.207454  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:26 old-k8s-version-145466 kubelet[664]: E0815 01:30:26.135386     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.207781  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:33 old-k8s-version-145466 kubelet[664]: E0815 01:30:33.132520     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.207971  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:41 old-k8s-version-145466 kubelet[664]: E0815 01:30:41.132604     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.208297  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:45 old-k8s-version-145466 kubelet[664]: E0815 01:30:45.132399     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.208479  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:54 old-k8s-version-145466 kubelet[664]: E0815 01:30:54.132932     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.208803  798414 logs.go:138] Found kubelet problem: Aug 15 01:30:56 old-k8s-version-145466 kubelet[664]: E0815 01:30:56.132938     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.208985  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:06 old-k8s-version-145466 kubelet[664]: E0815 01:31:06.132954     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.209308  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:11 old-k8s-version-145466 kubelet[664]: E0815 01:31:11.132193     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.209490  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:20 old-k8s-version-145466 kubelet[664]: E0815 01:31:20.132734     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.209814  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:23 old-k8s-version-145466 kubelet[664]: E0815 01:31:23.132188     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.209997  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:31 old-k8s-version-145466 kubelet[664]: E0815 01:31:31.132519     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.210321  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:38 old-k8s-version-145466 kubelet[664]: E0815 01:31:38.133982     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.210506  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:42 old-k8s-version-145466 kubelet[664]: E0815 01:31:42.132790     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.210833  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.132720     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.211019  798414 logs.go:138] Found kubelet problem: Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.133955     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.213451  798414 logs.go:138] Found kubelet problem: Aug 15 01:32:05 old-k8s-version-145466 kubelet[664]: E0815 01:32:05.164689     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.213780  798414 logs.go:138] Found kubelet problem: Aug 15 01:32:06 old-k8s-version-145466 kubelet[664]: E0815 01:32:06.132574     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	I0815 01:32:07.213790  798414 logs.go:123] Gathering logs for etcd [149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d] ...
	I0815 01:32:07.213804  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d"
	I0815 01:32:07.268209  798414 logs.go:123] Gathering logs for coredns [08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9] ...
	I0815 01:32:07.268240  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9"
	I0815 01:32:07.312267  798414 logs.go:123] Gathering logs for kube-scheduler [f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d] ...
	I0815 01:32:07.312297  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d"
	I0815 01:32:07.353856  798414 logs.go:123] Gathering logs for kube-controller-manager [d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f] ...
	I0815 01:32:07.353886  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f"
	I0815 01:32:07.412945  798414 logs.go:123] Gathering logs for kindnet [6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b] ...
	I0815 01:32:07.412980  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b"
	I0815 01:32:07.480680  798414 logs.go:123] Gathering logs for etcd [1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43] ...
	I0815 01:32:07.480758  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43"
	I0815 01:32:07.557927  798414 logs.go:123] Gathering logs for kube-controller-manager [30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd] ...
	I0815 01:32:07.557960  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd"
	I0815 01:32:07.614762  798414 logs.go:123] Gathering logs for kindnet [8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1] ...
	I0815 01:32:07.614799  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1"
	I0815 01:32:07.664776  798414 logs.go:123] Gathering logs for storage-provisioner [6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414] ...
	I0815 01:32:07.664822  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414"
	I0815 01:32:07.703095  798414 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:07.703130  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:07.722145  798414 logs.go:123] Gathering logs for kube-apiserver [5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645] ...
	I0815 01:32:07.722176  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645"
	I0815 01:32:07.778182  798414 logs.go:123] Gathering logs for kube-apiserver [d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f] ...
	I0815 01:32:07.778222  798414 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f"
	I0815 01:32:07.835453  798414 out.go:304] Setting ErrFile to fd 2...
	I0815 01:32:07.835484  798414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 01:32:07.835549  798414 out.go:239] X Problems detected in kubelet:
	W0815 01:32:07.835563  798414 out.go:239]   Aug 15 01:31:42 old-k8s-version-145466 kubelet[664]: E0815 01:31:42.132790     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.835572  798414 out.go:239]   Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.132720     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	W0815 01:32:07.835580  798414 out.go:239]   Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.133955     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 01:32:07.835587  798414 out.go:239]   Aug 15 01:32:05 old-k8s-version-145466 kubelet[664]: E0815 01:32:05.164689     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0815 01:32:07.835601  798414 out.go:239]   Aug 15 01:32:06 old-k8s-version-145466 kubelet[664]: E0815 01:32:06.132574     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	I0815 01:32:07.835607  798414 out.go:304] Setting ErrFile to fd 2...
	I0815 01:32:07.835621  798414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:32:09.162660  803761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:09.176219  803761 api_server.go:72] duration metric: took 4m31.625848926s to wait for apiserver process to appear ...
	I0815 01:32:09.176246  803761 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:32:09.176282  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:09.176342  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:09.225626  803761 cri.go:89] found id: "46c3cc70d2d5a312e15d7057df64d22a49f37589065ccc894bc8031b4a798951"
	I0815 01:32:09.225648  803761 cri.go:89] found id: "b690b96148f2277712ec84a51bec7a82401a426f009a95180cc964bb26bea9fb"
	I0815 01:32:09.225654  803761 cri.go:89] found id: ""
	I0815 01:32:09.225661  803761 logs.go:276] 2 containers: [46c3cc70d2d5a312e15d7057df64d22a49f37589065ccc894bc8031b4a798951 b690b96148f2277712ec84a51bec7a82401a426f009a95180cc964bb26bea9fb]
	I0815 01:32:09.225718  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.231370  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.234938  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0815 01:32:09.235010  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:09.273734  803761 cri.go:89] found id: "af8ae98559be9b10288ede748b6487069e6ae8e0717a8ea5cae3f97dae2473a3"
	I0815 01:32:09.273758  803761 cri.go:89] found id: "adc2315cf56607b40babf5046072a405155130dd7cd2b6c3e46f88da9eaa92f7"
	I0815 01:32:09.273762  803761 cri.go:89] found id: ""
	I0815 01:32:09.273770  803761 logs.go:276] 2 containers: [af8ae98559be9b10288ede748b6487069e6ae8e0717a8ea5cae3f97dae2473a3 adc2315cf56607b40babf5046072a405155130dd7cd2b6c3e46f88da9eaa92f7]
	I0815 01:32:09.273826  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.278872  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.284237  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0815 01:32:09.284313  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:09.325430  803761 cri.go:89] found id: "6c08c4834233480bedf28f39119c36ba7f9f59e52e6cd70fd88af53fe3b374af"
	I0815 01:32:09.325455  803761 cri.go:89] found id: "ba8534f9d63589367129cbff9b8d4f2498b7f86093c708dd66e2ccf978b436f1"
	I0815 01:32:09.325461  803761 cri.go:89] found id: ""
	I0815 01:32:09.325468  803761 logs.go:276] 2 containers: [6c08c4834233480bedf28f39119c36ba7f9f59e52e6cd70fd88af53fe3b374af ba8534f9d63589367129cbff9b8d4f2498b7f86093c708dd66e2ccf978b436f1]
	I0815 01:32:09.325523  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.329072  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.332735  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:09.332857  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:09.373294  803761 cri.go:89] found id: "914fef8b03c5cf0aba5b06b0736a9992a3324577daf003ec30aa0830552ddd54"
	I0815 01:32:09.373319  803761 cri.go:89] found id: "d9f6521d16507f27bd6386ff17f8b963c257f216da7f467c9210fb9178524678"
	I0815 01:32:09.373325  803761 cri.go:89] found id: ""
	I0815 01:32:09.373332  803761 logs.go:276] 2 containers: [914fef8b03c5cf0aba5b06b0736a9992a3324577daf003ec30aa0830552ddd54 d9f6521d16507f27bd6386ff17f8b963c257f216da7f467c9210fb9178524678]
	I0815 01:32:09.373389  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.377328  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.380946  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:09.381027  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:09.423505  803761 cri.go:89] found id: "f5b1c582f2613d5f66426be71311ebe74f79284ef08901e9fc908368b140053d"
	I0815 01:32:09.423533  803761 cri.go:89] found id: "f3022d73668343c2effb73b4ac408bd5f9d53392de4778783676da2189835e39"
	I0815 01:32:09.423538  803761 cri.go:89] found id: ""
	I0815 01:32:09.423546  803761 logs.go:276] 2 containers: [f5b1c582f2613d5f66426be71311ebe74f79284ef08901e9fc908368b140053d f3022d73668343c2effb73b4ac408bd5f9d53392de4778783676da2189835e39]
	I0815 01:32:09.423601  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.427228  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.430939  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:09.431047  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:09.469187  803761 cri.go:89] found id: "9612b5845ae0c1e1f75da16acadeb15251654526c47e57438d92794112c4cff3"
	I0815 01:32:09.469217  803761 cri.go:89] found id: "2b053d2cfe8aa99ea78837e3b3ce749a8c262ba3833f5bc139cbaa2c7657cb22"
	I0815 01:32:09.469222  803761 cri.go:89] found id: ""
	I0815 01:32:09.469230  803761 logs.go:276] 2 containers: [9612b5845ae0c1e1f75da16acadeb15251654526c47e57438d92794112c4cff3 2b053d2cfe8aa99ea78837e3b3ce749a8c262ba3833f5bc139cbaa2c7657cb22]
	I0815 01:32:09.469287  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.473210  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.476881  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:09.477023  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:09.526236  803761 cri.go:89] found id: "01f1d14032652bdf25ee75b1524737966f86b4c47f7b6e32348b4b50e444e7e0"
	I0815 01:32:09.526303  803761 cri.go:89] found id: "5a854e83e9a5556e1a2a9298080f95e0ec2337600228c065637e73d90913c7cd"
	I0815 01:32:09.526325  803761 cri.go:89] found id: ""
	I0815 01:32:09.526353  803761 logs.go:276] 2 containers: [01f1d14032652bdf25ee75b1524737966f86b4c47f7b6e32348b4b50e444e7e0 5a854e83e9a5556e1a2a9298080f95e0ec2337600228c065637e73d90913c7cd]
	I0815 01:32:09.526440  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.530392  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.534110  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:09.534181  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:09.586994  803761 cri.go:89] found id: "10c17d67b1c99bc8c8761398268c9dce700200fe24f2d6a147a048129527ab84"
	I0815 01:32:09.587067  803761 cri.go:89] found id: ""
	I0815 01:32:09.587089  803761 logs.go:276] 1 containers: [10c17d67b1c99bc8c8761398268c9dce700200fe24f2d6a147a048129527ab84]
	I0815 01:32:09.587183  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.590783  803761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:32:09.590855  803761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:32:09.630342  803761 cri.go:89] found id: "a413edeaf5acb43b535f0f12013ec424c1692ef01fdf855f50845dd0e017228b"
	I0815 01:32:09.630415  803761 cri.go:89] found id: "c88884152535dacfc5c62be43dc4ce7ff717b81e2d387dc5c6383a1ba266b499"
	I0815 01:32:09.630427  803761 cri.go:89] found id: ""
	I0815 01:32:09.630436  803761 logs.go:276] 2 containers: [a413edeaf5acb43b535f0f12013ec424c1692ef01fdf855f50845dd0e017228b c88884152535dacfc5c62be43dc4ce7ff717b81e2d387dc5c6383a1ba266b499]
	I0815 01:32:09.630535  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.634191  803761 ssh_runner.go:195] Run: which crictl
	I0815 01:32:09.637544  803761 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:09.637616  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 01:32:09.685304  803761 logs.go:138] Found kubelet problem: Aug 15 01:27:46 no-preload-891255 kubelet[655]: W0815 01:27:46.751836     655 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-891255" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-891255' and this object
	W0815 01:32:09.685590  803761 logs.go:138] Found kubelet problem: Aug 15 01:27:46 no-preload-891255 kubelet[655]: E0815 01:27:46.751954     655 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-891255\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-891255' and this object" logger="UnhandledError"
	I0815 01:32:09.720692  803761 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:09.720730  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:09.738598  803761 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:09.738633  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:32:09.874914  803761 logs.go:123] Gathering logs for kube-apiserver [46c3cc70d2d5a312e15d7057df64d22a49f37589065ccc894bc8031b4a798951] ...
	I0815 01:32:09.874946  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46c3cc70d2d5a312e15d7057df64d22a49f37589065ccc894bc8031b4a798951"
	I0815 01:32:09.936346  803761 logs.go:123] Gathering logs for etcd [af8ae98559be9b10288ede748b6487069e6ae8e0717a8ea5cae3f97dae2473a3] ...
	I0815 01:32:09.936385  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af8ae98559be9b10288ede748b6487069e6ae8e0717a8ea5cae3f97dae2473a3"
	I0815 01:32:09.987047  803761 logs.go:123] Gathering logs for coredns [ba8534f9d63589367129cbff9b8d4f2498b7f86093c708dd66e2ccf978b436f1] ...
	I0815 01:32:09.987082  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba8534f9d63589367129cbff9b8d4f2498b7f86093c708dd66e2ccf978b436f1"
	I0815 01:32:10.038745  803761 logs.go:123] Gathering logs for kube-controller-manager [9612b5845ae0c1e1f75da16acadeb15251654526c47e57438d92794112c4cff3] ...
	I0815 01:32:10.038779  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9612b5845ae0c1e1f75da16acadeb15251654526c47e57438d92794112c4cff3"
	I0815 01:32:10.122899  803761 logs.go:123] Gathering logs for kubernetes-dashboard [10c17d67b1c99bc8c8761398268c9dce700200fe24f2d6a147a048129527ab84] ...
	I0815 01:32:10.122937  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10c17d67b1c99bc8c8761398268c9dce700200fe24f2d6a147a048129527ab84"
	I0815 01:32:10.174661  803761 logs.go:123] Gathering logs for storage-provisioner [c88884152535dacfc5c62be43dc4ce7ff717b81e2d387dc5c6383a1ba266b499] ...
	I0815 01:32:10.174698  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c88884152535dacfc5c62be43dc4ce7ff717b81e2d387dc5c6383a1ba266b499"
	I0815 01:32:10.222050  803761 logs.go:123] Gathering logs for kube-proxy [f3022d73668343c2effb73b4ac408bd5f9d53392de4778783676da2189835e39] ...
	I0815 01:32:10.222092  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3022d73668343c2effb73b4ac408bd5f9d53392de4778783676da2189835e39"
	I0815 01:32:10.267800  803761 logs.go:123] Gathering logs for kube-controller-manager [2b053d2cfe8aa99ea78837e3b3ce749a8c262ba3833f5bc139cbaa2c7657cb22] ...
	I0815 01:32:10.267912  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b053d2cfe8aa99ea78837e3b3ce749a8c262ba3833f5bc139cbaa2c7657cb22"
	I0815 01:32:10.326327  803761 logs.go:123] Gathering logs for kube-apiserver [b690b96148f2277712ec84a51bec7a82401a426f009a95180cc964bb26bea9fb] ...
	I0815 01:32:10.326360  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b690b96148f2277712ec84a51bec7a82401a426f009a95180cc964bb26bea9fb"
	I0815 01:32:10.395215  803761 logs.go:123] Gathering logs for etcd [adc2315cf56607b40babf5046072a405155130dd7cd2b6c3e46f88da9eaa92f7] ...
	I0815 01:32:10.395254  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adc2315cf56607b40babf5046072a405155130dd7cd2b6c3e46f88da9eaa92f7"
	I0815 01:32:10.441696  803761 logs.go:123] Gathering logs for kube-scheduler [914fef8b03c5cf0aba5b06b0736a9992a3324577daf003ec30aa0830552ddd54] ...
	I0815 01:32:10.441729  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 914fef8b03c5cf0aba5b06b0736a9992a3324577daf003ec30aa0830552ddd54"
	I0815 01:32:10.482218  803761 logs.go:123] Gathering logs for kube-scheduler [d9f6521d16507f27bd6386ff17f8b963c257f216da7f467c9210fb9178524678] ...
	I0815 01:32:10.482246  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9f6521d16507f27bd6386ff17f8b963c257f216da7f467c9210fb9178524678"
	I0815 01:32:10.535521  803761 logs.go:123] Gathering logs for kube-proxy [f5b1c582f2613d5f66426be71311ebe74f79284ef08901e9fc908368b140053d] ...
	I0815 01:32:10.535598  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5b1c582f2613d5f66426be71311ebe74f79284ef08901e9fc908368b140053d"
	I0815 01:32:10.577446  803761 logs.go:123] Gathering logs for containerd ...
	I0815 01:32:10.577474  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0815 01:32:10.638110  803761 logs.go:123] Gathering logs for coredns [6c08c4834233480bedf28f39119c36ba7f9f59e52e6cd70fd88af53fe3b374af] ...
	I0815 01:32:10.638146  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c08c4834233480bedf28f39119c36ba7f9f59e52e6cd70fd88af53fe3b374af"
	I0815 01:32:10.680386  803761 logs.go:123] Gathering logs for kindnet [01f1d14032652bdf25ee75b1524737966f86b4c47f7b6e32348b4b50e444e7e0] ...
	I0815 01:32:10.680415  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01f1d14032652bdf25ee75b1524737966f86b4c47f7b6e32348b4b50e444e7e0"
	I0815 01:32:10.740104  803761 logs.go:123] Gathering logs for kindnet [5a854e83e9a5556e1a2a9298080f95e0ec2337600228c065637e73d90913c7cd] ...
	I0815 01:32:10.740143  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a854e83e9a5556e1a2a9298080f95e0ec2337600228c065637e73d90913c7cd"
	I0815 01:32:10.785124  803761 logs.go:123] Gathering logs for storage-provisioner [a413edeaf5acb43b535f0f12013ec424c1692ef01fdf855f50845dd0e017228b] ...
	I0815 01:32:10.785153  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a413edeaf5acb43b535f0f12013ec424c1692ef01fdf855f50845dd0e017228b"
	I0815 01:32:10.828865  803761 logs.go:123] Gathering logs for container status ...
	I0815 01:32:10.828897  803761 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:10.876248  803761 out.go:304] Setting ErrFile to fd 2...
	I0815 01:32:10.876277  803761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 01:32:10.876372  803761 out.go:239] X Problems detected in kubelet:
	W0815 01:32:10.876387  803761 out.go:239]   Aug 15 01:27:46 no-preload-891255 kubelet[655]: W0815 01:27:46.751836     655 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-891255" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-891255' and this object
	W0815 01:32:10.876423  803761 out.go:239]   Aug 15 01:27:46 no-preload-891255 kubelet[655]: E0815 01:27:46.751954     655 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-891255\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-891255' and this object" logger="UnhandledError"
	I0815 01:32:10.876443  803761 out.go:304] Setting ErrFile to fd 2...
	I0815 01:32:10.876450  803761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:32:17.836740  798414 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0815 01:32:17.849082  798414 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0815 01:32:17.851627  798414 out.go:177] 
	W0815 01:32:17.854581  798414 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0815 01:32:17.854634  798414 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0815 01:32:17.854660  798414 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0815 01:32:17.854666  798414 out.go:239] * 
	W0815 01:32:17.856250  798414 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:32:17.863608  798414 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	ceecc0a151db1       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   256358d254fa2       dashboard-metrics-scraper-8d5bb5db8-g4fn2
	fe2ee809aa8dd       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   f61c6a02b1206       storage-provisioner
	666623d148109       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   d1a327b708080       kubernetes-dashboard-cd95d586-9rfcm
	6d60100a03bf0       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   f61c6a02b1206       storage-provisioner
	05adb0e367237       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   3ce93744f731a       busybox
	65046ef81eb0e       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   a8eb6473a66d9       kube-proxy-hdj25
	08e5b39842980       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   5b60a432208b4       coredns-74ff55c5b-sc7dc
	6741807183f4b       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   d358b0afd2e3b       kindnet-tjp7r
	ff181601fe7c6       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   afcfd312c5f71       kube-scheduler-old-k8s-version-145466
	5733bb4495fb8       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   960d382659d6f       kube-apiserver-old-k8s-version-145466
	30a64d5ca6b72       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   d82bcfd9d2fb2       kube-controller-manager-old-k8s-version-145466
	1b6894d83d05f       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   6ec154575c292       etcd-old-k8s-version-145466
	a16f69b01656d       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   c5c25271141c3       busybox
	56c5221502ba9       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   8a4283617db0b       coredns-74ff55c5b-sc7dc
	8217cbe4e252e       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   6c20dde772567       kindnet-tjp7r
	8f785a4ea8b90       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   2d21edfe13d78       kube-proxy-hdj25
	d14132b610c98       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   6b9c935ace106       kube-apiserver-old-k8s-version-145466
	f5298f1e3fd8d       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   6950503ba4c3c       kube-scheduler-old-k8s-version-145466
	149a435a4cfc7       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   cc28364af1257       etcd-old-k8s-version-145466
	d52d38339364a       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   7dffe829ec7ec       kube-controller-manager-old-k8s-version-145466
	
	
	==> containerd <==
	Aug 15 01:28:28 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:28:28.158270676Z" level=info msg="CreateContainer within sandbox \"256358d254fa2d5a65f15e6d5458cbedb81cc3040ccfaf81be360cb5755c6353\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"69aa712e81322b2e2af7b2cfe18455615127bb9d18cb3e11ed7bbbb5bb146898\""
	Aug 15 01:28:28 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:28:28.159493352Z" level=info msg="StartContainer for \"69aa712e81322b2e2af7b2cfe18455615127bb9d18cb3e11ed7bbbb5bb146898\""
	Aug 15 01:28:28 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:28:28.245170590Z" level=info msg="StartContainer for \"69aa712e81322b2e2af7b2cfe18455615127bb9d18cb3e11ed7bbbb5bb146898\" returns successfully"
	Aug 15 01:28:28 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:28:28.276801621Z" level=info msg="shim disconnected" id=69aa712e81322b2e2af7b2cfe18455615127bb9d18cb3e11ed7bbbb5bb146898 namespace=k8s.io
	Aug 15 01:28:28 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:28:28.276862363Z" level=warning msg="cleaning up after shim disconnected" id=69aa712e81322b2e2af7b2cfe18455615127bb9d18cb3e11ed7bbbb5bb146898 namespace=k8s.io
	Aug 15 01:28:28 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:28:28.276872636Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 15 01:28:29 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:28:29.030245271Z" level=info msg="RemoveContainer for \"6249a06c2078fc3dad88b4b501240da141283597db8273aa323f219f3d2a0f6c\""
	Aug 15 01:28:29 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:28:29.037200996Z" level=info msg="RemoveContainer for \"6249a06c2078fc3dad88b4b501240da141283597db8273aa323f219f3d2a0f6c\" returns successfully"
	Aug 15 01:29:23 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:23.133462632Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 01:29:23 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:23.138092356Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 15 01:29:23 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:23.139284624Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 15 01:29:23 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:23.139394704Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 15 01:29:51 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:51.134179879Z" level=info msg="CreateContainer within sandbox \"256358d254fa2d5a65f15e6d5458cbedb81cc3040ccfaf81be360cb5755c6353\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Aug 15 01:29:51 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:51.149054836Z" level=info msg="CreateContainer within sandbox \"256358d254fa2d5a65f15e6d5458cbedb81cc3040ccfaf81be360cb5755c6353\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87\""
	Aug 15 01:29:51 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:51.149719031Z" level=info msg="StartContainer for \"ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87\""
	Aug 15 01:29:51 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:51.212546873Z" level=info msg="StartContainer for \"ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87\" returns successfully"
	Aug 15 01:29:51 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:51.238538797Z" level=info msg="shim disconnected" id=ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87 namespace=k8s.io
	Aug 15 01:29:51 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:51.238721139Z" level=warning msg="cleaning up after shim disconnected" id=ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87 namespace=k8s.io
	Aug 15 01:29:51 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:51.238746075Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 15 01:29:51 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:51.283263925Z" level=info msg="RemoveContainer for \"69aa712e81322b2e2af7b2cfe18455615127bb9d18cb3e11ed7bbbb5bb146898\""
	Aug 15 01:29:51 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:29:51.289561763Z" level=info msg="RemoveContainer for \"69aa712e81322b2e2af7b2cfe18455615127bb9d18cb3e11ed7bbbb5bb146898\" returns successfully"
	Aug 15 01:32:05 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:32:05.133004407Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 01:32:05 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:32:05.161963823Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 15 01:32:05 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:32:05.163791174Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 15 01:32:05 old-k8s-version-145466 containerd[571]: time="2024-08-15T01:32:05.163899391Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [08e5b398429806625e2d4b21b843272b9863738aaf76eccca2fdcd7f138d10d9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:37783 - 29628 "HINFO IN 578344926777194319.6772010912184897932. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.019690577s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0815 01:27:02.393855       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-15 01:26:32.392745126 +0000 UTC m=+0.048514107) (total time: 30.00099711s):
	Trace[2019727887]: [30.00099711s] [30.00099711s] END
	E0815 01:27:02.393884       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0815 01:27:02.395346       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-15 01:26:32.394938618 +0000 UTC m=+0.050707599) (total time: 30.000386274s):
	Trace[939984059]: [30.000386274s] [30.000386274s] END
	E0815 01:27:02.395360       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0815 01:27:02.395768       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-15 01:26:32.395283199 +0000 UTC m=+0.051052180) (total time: 30.00046958s):
	Trace[1474941318]: [30.00046958s] [30.00046958s] END
	E0815 01:27:02.395791       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [56c5221502ba92aed6a2e6121e471828798a80e497e1f649f9110eb04afaffe4] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:43232 - 7411 "HINFO IN 3400014083538877995.5734621554272673838. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011882298s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-145466
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-145466
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=old-k8s-version-145466
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T01_23_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 01:23:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-145466
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:32:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:27:19 +0000   Thu, 15 Aug 2024 01:23:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:27:19 +0000   Thu, 15 Aug 2024 01:23:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:27:19 +0000   Thu, 15 Aug 2024 01:23:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:27:19 +0000   Thu, 15 Aug 2024 01:24:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-145466
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebd81c19db5d488baf92b14afd5371dd
	  System UUID:                fb0de378-2e0e-4b19-ac82-306e2a38f206
	  Boot ID:                    ea2065b4-362f-4442-9b74-bf31c8d731d6
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 coredns-74ff55c5b-sc7dc                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m19s
	  kube-system                 etcd-old-k8s-version-145466                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m25s
	  kube-system                 kindnet-tjp7r                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m19s
	  kube-system                 kube-apiserver-old-k8s-version-145466             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-controller-manager-old-k8s-version-145466    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-hdj25                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-scheduler-old-k8s-version-145466             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 metrics-server-9975d5f86-qvcw4                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m33s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-g4fn2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-9rfcm               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m45s (x5 over 8m45s)  kubelet     Node old-k8s-version-145466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m45s (x4 over 8m45s)  kubelet     Node old-k8s-version-145466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m45s (x4 over 8m45s)  kubelet     Node old-k8s-version-145466 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m26s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m26s                  kubelet     Node old-k8s-version-145466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m26s                  kubelet     Node old-k8s-version-145466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m26s                  kubelet     Node old-k8s-version-145466 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m26s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m19s                  kubelet     Node old-k8s-version-145466 status is now: NodeReady
	  Normal  Starting                 8m18s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m4s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-145466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-145466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m3s)    kubelet     Node old-k8s-version-145466 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
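
Reviewer note (not part of the captured output): the node view above can be re-queried against the live profile for comparison, assuming the minikube profile name doubles as the kubeconfig context, as it does elsewhere in this report:

    kubectl --context old-k8s-version-145466 describe node old-k8s-version-145466
    kubectl --context old-k8s-version-145466 get pods -A -o wide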
	
	
	==> dmesg <==
	
	
	==> etcd [149a435a4cfc7f9897ab3f6a5b272f0e4fc49d2f9292ac33c52ef454a4a9446d] <==
	2024-08-15 01:23:35.718141 I | etcdserver/membership: added member 9f0758e1c58a86ed [https://192.168.85.2:2380] to cluster 68eaea490fab4e05
	raft2024/08/15 01:23:36 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/08/15 01:23:36 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/08/15 01:23:36 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/08/15 01:23:36 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/08/15 01:23:36 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-08-15 01:23:36.670519 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-15 01:23:36.675225 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-15 01:23:36.675290 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-15 01:23:36.675316 I | etcdserver: published {Name:old-k8s-version-145466 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-08-15 01:23:36.675493 I | embed: ready to serve client requests
	2024-08-15 01:23:36.677185 I | embed: serving client requests on 192.168.85.2:2379
	2024-08-15 01:23:36.677392 I | embed: ready to serve client requests
	2024-08-15 01:23:36.684115 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-15 01:24:03.047221 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:24:10.582093 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:24:20.582006 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:24:30.582062 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:24:40.582030 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:24:50.582166 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:25:00.582125 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:25:10.582141 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:25:20.582171 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:25:30.582225 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:25:40.582270 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [1b6894d83d05f469576beb2ef6554bcad7b2283941fd3fa12f5c9ce2de192c43] <==
	2024-08-15 01:28:10.428912 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:28:20.428777 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:28:30.429040 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:28:40.428847 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:28:50.428916 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:29:00.428908 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:29:10.428827 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:29:20.428754 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:29:30.428814 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:29:40.428771 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:29:50.428849 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:30:00.453259 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:30:10.428969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:30:20.428994 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:30:30.428833 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:30:40.428851 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:30:50.428762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:31:00.428823 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:31:10.429495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:31:20.429019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:31:30.431826 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:31:40.428804 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:31:50.428688 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:32:00.431564 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 01:32:10.429032 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 01:32:19 up  5:14,  0 users,  load average: 0.79, 1.88, 2.57
	Linux old-k8s-version-145466 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [6741807183f4bf2b67c5393c3b58db57e71d92cf2e28212ea1f5069e6f3d1d7b] <==
	I0815 01:31:02.761887       1 main.go:299] handling current node
	I0815 01:31:12.761794       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:31:12.761829       1 main.go:299] handling current node
	I0815 01:31:22.761787       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:31:22.761831       1 main.go:299] handling current node
	W0815 01:31:23.317325       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 01:31:23.317365       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0815 01:31:29.849178       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 01:31:29.851933       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 01:31:32.761853       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:31:32.761891       1 main.go:299] handling current node
	W0815 01:31:34.923755       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 01:31:34.923790       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 01:31:42.762201       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:31:42.762240       1 main.go:299] handling current node
	I0815 01:31:52.761389       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:31:52.761423       1 main.go:299] handling current node
	W0815 01:31:59.974678       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 01:31:59.974816       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 01:32:02.761136       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:32:02.761173       1 main.go:299] handling current node
	W0815 01:32:09.702652       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 01:32:09.702764       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 01:32:12.761363       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:32:12.761569       1 main.go:299] handling current node
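
Reviewer note (not part of the captured output): both kindnet logs repeat the same RBAC denials for the kube-system:kindnet service account (pods, namespaces, networkpolicies). As a diagnostic sketch only, the service account's effective permissions could be checked with kubectl auth can-i, again assuming the profile name is the context name:

    kubectl --context old-k8s-version-145466 auth can-i list pods --as=system:serviceaccount:kube-system:kindnet
    kubectl --context old-k8s-version-145466 auth can-i list networkpolicies.networking.k8s.io --as=system:serviceaccount:kube-system:kindnet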
	
	
	==> kindnet [8217cbe4e252e51d6fedbec42329d5818041e8f1e19f3b35dd319e85b22c6bb1] <==
	E0815 01:24:37.624513       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0815 01:24:37.932821       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 01:24:37.932866       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 01:24:44.540653       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:24:44.540693       1 main.go:299] handling current node
	W0815 01:24:47.723621       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 01:24:47.723667       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 01:24:54.541258       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:24:54.541290       1 main.go:299] handling current node
	I0815 01:25:04.540309       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:25:04.540346       1 main.go:299] handling current node
	W0815 01:25:14.069085       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 01:25:14.069127       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 01:25:14.541096       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:25:14.541133       1 main.go:299] handling current node
	W0815 01:25:18.807342       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 01:25:18.807379       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 01:25:24.541171       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:25:24.541200       1 main.go:299] handling current node
	W0815 01:25:24.824544       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 01:25:24.824579       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 01:25:34.540965       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:25:34.541009       1 main.go:299] handling current node
	I0815 01:25:44.543584       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0815 01:25:44.543618       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5733bb4495fb8d0f6478c63776251feb2c864959867e0608d1ba170e61e5d645] <==
	I0815 01:28:43.416445       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 01:28:43.416460       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0815 01:29:18.028608       1 client.go:360] parsed scheme: "passthrough"
	I0815 01:29:18.028658       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 01:29:18.028669       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0815 01:29:32.475293       1 handler_proxy.go:102] no RequestInfo found in the context
	E0815 01:29:32.475374       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0815 01:29:32.475383       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:29:56.178111       1 client.go:360] parsed scheme: "passthrough"
	I0815 01:29:56.178354       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 01:29:56.178370       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0815 01:30:27.651831       1 client.go:360] parsed scheme: "passthrough"
	I0815 01:30:27.651892       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 01:30:27.651901       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0815 01:31:07.122375       1 client.go:360] parsed scheme: "passthrough"
	I0815 01:31:07.122422       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 01:31:07.122431       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0815 01:31:29.034847       1 handler_proxy.go:102] no RequestInfo found in the context
	E0815 01:31:29.034922       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0815 01:31:29.034931       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:31:43.145654       1 client.go:360] parsed scheme: "passthrough"
	I0815 01:31:43.145702       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 01:31:43.145711       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
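
Reviewer note (not part of the captured output): the repeated 503s for v1beta1.metrics.k8s.io indicate the aggregated metrics API is registered but its backing service never becomes available, which is consistent with metrics-server sitting in ImagePullBackOff in the kubelet log later in this report. A diagnostic sketch, assuming the same context naming:

    kubectl --context old-k8s-version-145466 get apiservice v1beta1.metrics.k8s.io
    kubectl --context old-k8s-version-145466 -n kube-system describe pod metrics-server-9975d5f86-qvcw4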
	
	
	==> kube-apiserver [d14132b610c987368bddf5640aaa164f967d7378bf448e729248592d6c4ff00f] <==
	I0815 01:23:43.013548       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0815 01:23:43.013580       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0815 01:23:43.023227       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0815 01:23:43.027689       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0815 01:23:43.027717       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0815 01:23:43.493924       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 01:23:43.526449       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0815 01:23:43.595552       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0815 01:23:43.596653       1 controller.go:606] quota admission added evaluator for: endpoints
	I0815 01:23:43.608076       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 01:23:44.073054       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 01:23:44.676775       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0815 01:23:45.178961       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0815 01:23:45.283638       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0815 01:24:00.767564       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0815 01:24:00.820635       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0815 01:24:18.025629       1 client.go:360] parsed scheme: "passthrough"
	I0815 01:24:18.025885       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 01:24:18.025905       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0815 01:24:55.232287       1 client.go:360] parsed scheme: "passthrough"
	I0815 01:24:55.232334       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 01:24:55.232343       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0815 01:25:29.987615       1 client.go:360] parsed scheme: "passthrough"
	I0815 01:25:29.987660       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 01:25:29.987668       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [30a64d5ca6b72871b84648d9590c4402c003b7b395e81783d62d49251a4ef4fd] <==
	W0815 01:27:54.752844       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 01:28:20.830994       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 01:28:26.403422       1 request.go:655] Throttling request took 1.048478243s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0815 01:28:27.255027       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 01:28:51.332851       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 01:28:58.905487       1 request.go:655] Throttling request took 1.048251571s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0815 01:28:59.756892       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 01:29:21.834566       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 01:29:31.407357       1 request.go:655] Throttling request took 1.048374805s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0815 01:29:32.258864       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 01:29:52.335912       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 01:30:03.909335       1 request.go:655] Throttling request took 1.048331651s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0815 01:30:04.760755       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 01:30:22.837793       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 01:30:36.411301       1 request.go:655] Throttling request took 1.048442144s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0815 01:30:37.262732       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 01:30:53.339483       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 01:31:08.913104       1 request.go:655] Throttling request took 1.048313768s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0815 01:31:09.764564       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 01:31:23.842455       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 01:31:41.415164       1 request.go:655] Throttling request took 1.048517094s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v1?timeout=32s
	W0815 01:31:42.266735       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 01:31:54.345487       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 01:32:13.917233       1 request.go:655] Throttling request took 1.048224787s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0815 01:32:14.768664       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [d52d38339364a7a925d98b1f1d7ec2475b6fe8f625acabfb0531c1c8de3f379f] <==
	I0815 01:24:00.782490       1 shared_informer.go:247] Caches are synced for TTL 
	I0815 01:24:00.788564       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0815 01:24:00.798091       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0815 01:24:00.798336       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0815 01:24:00.801807       1 shared_informer.go:247] Caches are synced for resource quota 
	I0815 01:24:00.803506       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0815 01:24:00.854588       1 shared_informer.go:247] Caches are synced for expand 
	I0815 01:24:00.854941       1 shared_informer.go:247] Caches are synced for attach detach 
	I0815 01:24:00.912252       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-bstqt"
	I0815 01:24:00.961708       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-sc7dc"
	I0815 01:24:00.965299       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hdj25"
	I0815 01:24:01.000533       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tjp7r"
	I0815 01:24:01.040177       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E0815 01:24:01.172115       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"c2189e2e-177f-4b5c-a7b0-a13bcb776305", ResourceVersion:"282", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63859281825, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400165de20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400165de40)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400165de60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400165de80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400165dea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400165dec0), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400165dee0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400165df20)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40016630e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40005a7c88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000ab7b90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000280540)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40005a7cd0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0815 01:24:01.340859       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0815 01:24:01.348339       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0815 01:24:01.348358       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0815 01:24:01.549570       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0815 01:24:01.549612       1 shared_informer.go:247] Caches are synced for resource quota 
	I0815 01:24:02.241177       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0815 01:24:02.264040       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-bstqt"
	I0815 01:24:05.750207       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0815 01:25:45.816398       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0815 01:25:45.979465       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0815 01:25:45.980526       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [65046ef81eb0e1914eb58367c5b7d798a0c27c3e9f4bc8bb8fbab2ee0b4a1418] <==
	I0815 01:26:32.758265       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0815 01:26:32.758335       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0815 01:26:32.783588       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0815 01:26:32.783673       1 server_others.go:185] Using iptables Proxier.
	I0815 01:26:32.784428       1 server.go:650] Version: v1.20.0
	I0815 01:26:32.785156       1 config.go:315] Starting service config controller
	I0815 01:26:32.785170       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0815 01:26:32.785188       1 config.go:224] Starting endpoint slice config controller
	I0815 01:26:32.785191       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0815 01:26:32.885294       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0815 01:26:32.885382       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [8f785a4ea8b90af1cd5fd5d68fa1033fe4557f9cbcc027b15570ec5b806eaea5] <==
	I0815 01:24:01.960168       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0815 01:24:01.960285       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0815 01:24:01.983998       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0815 01:24:01.984093       1 server_others.go:185] Using iptables Proxier.
	I0815 01:24:01.984296       1 server.go:650] Version: v1.20.0
	I0815 01:24:01.984790       1 config.go:315] Starting service config controller
	I0815 01:24:01.984799       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0815 01:24:01.986638       1 config.go:224] Starting endpoint slice config controller
	I0815 01:24:01.986650       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0815 01:24:02.087061       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0815 01:24:02.087139       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [f5298f1e3fd8d068a40daf1067d57275eb2c8c6a1f8685c470fe2886ad1c900d] <==
	I0815 01:23:37.937960       1 serving.go:331] Generated self-signed cert in-memory
	W0815 01:23:42.208258       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 01:23:42.208312       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 01:23:42.208322       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 01:23:42.208328       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 01:23:42.381409       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0815 01:23:42.384110       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 01:23:42.384149       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 01:23:42.386949       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0815 01:23:42.420343       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 01:23:42.420749       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 01:23:42.420945       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 01:23:42.422845       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 01:23:42.429146       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 01:23:42.429489       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 01:23:42.429691       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 01:23:42.429883       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 01:23:42.430074       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 01:23:42.430255       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 01:23:42.430445       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 01:23:42.430628       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0815 01:23:44.086724       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [ff181601fe7c6d0976ee6276372afd0764d4728118e73d5eddde8cfc09c201c8] <==
	I0815 01:26:20.918231       1 serving.go:331] Generated self-signed cert in-memory
	I0815 01:26:29.042376       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0815 01:26:29.042470       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0815 01:26:29.042477       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0815 01:26:29.042490       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0815 01:26:29.137189       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 01:26:29.137212       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 01:26:29.137237       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0815 01:26:29.137242       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0815 01:26:29.238073       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	I0815 01:26:29.238216       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0815 01:26:29.250610       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	
	
	==> kubelet <==
	Aug 15 01:30:45 old-k8s-version-145466 kubelet[664]: I0815 01:30:45.131965     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87
	Aug 15 01:30:45 old-k8s-version-145466 kubelet[664]: E0815 01:30:45.132399     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	Aug 15 01:30:54 old-k8s-version-145466 kubelet[664]: E0815 01:30:54.132932     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 01:30:56 old-k8s-version-145466 kubelet[664]: I0815 01:30:56.132194     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87
	Aug 15 01:30:56 old-k8s-version-145466 kubelet[664]: E0815 01:30:56.132938     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	Aug 15 01:31:06 old-k8s-version-145466 kubelet[664]: E0815 01:31:06.132954     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 01:31:11 old-k8s-version-145466 kubelet[664]: I0815 01:31:11.131796     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87
	Aug 15 01:31:11 old-k8s-version-145466 kubelet[664]: E0815 01:31:11.132193     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	Aug 15 01:31:20 old-k8s-version-145466 kubelet[664]: E0815 01:31:20.132734     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 01:31:23 old-k8s-version-145466 kubelet[664]: I0815 01:31:23.131725     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87
	Aug 15 01:31:23 old-k8s-version-145466 kubelet[664]: E0815 01:31:23.132188     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	Aug 15 01:31:31 old-k8s-version-145466 kubelet[664]: E0815 01:31:31.132519     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 01:31:38 old-k8s-version-145466 kubelet[664]: I0815 01:31:38.133151     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87
	Aug 15 01:31:38 old-k8s-version-145466 kubelet[664]: E0815 01:31:38.133982     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	Aug 15 01:31:42 old-k8s-version-145466 kubelet[664]: E0815 01:31:42.132790     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: I0815 01:31:53.131928     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87
	Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.132720     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	Aug 15 01:31:53 old-k8s-version-145466 kubelet[664]: E0815 01:31:53.133955     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 01:32:05 old-k8s-version-145466 kubelet[664]: E0815 01:32:05.164072     664 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 15 01:32:05 old-k8s-version-145466 kubelet[664]: E0815 01:32:05.164122     664 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 15 01:32:05 old-k8s-version-145466 kubelet[664]: E0815 01:32:05.164644     664 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-vxmd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-qvcw4_kube-system(0f08217
8-be0b-47dc-9d79-6089e5d972a6): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 15 01:32:05 old-k8s-version-145466 kubelet[664]: E0815 01:32:05.164689     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 15 01:32:06 old-k8s-version-145466 kubelet[664]: I0815 01:32:06.132249     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: ceecc0a151db1f23c823edd31cebc430d00d44479d0fcacd59b0a5acc4ba9f87
	Aug 15 01:32:06 old-k8s-version-145466 kubelet[664]: E0815 01:32:06.132574     664 pod_workers.go:191] Error syncing pod 0ae7bccd-aa53-4eba-9226-957e7ab0147b ("dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-g4fn2_kubernetes-dashboard(0ae7bccd-aa53-4eba-9226-957e7ab0147b)"
	Aug 15 01:32:16 old-k8s-version-145466 kubelet[664]: E0815 01:32:16.155763     664 pod_workers.go:191] Error syncing pod 0f082178-be0b-47dc-9d79-6089e5d972a6 ("metrics-server-9975d5f86-qvcw4_kube-system(0f082178-be0b-47dc-9d79-6089e5d972a6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [666623d148109918104f481f973b56d48ff8178973721f5b99dc53ea7fb06de6] <==
	2024/08/15 01:26:54 Using namespace: kubernetes-dashboard
	2024/08/15 01:26:54 Using in-cluster config to connect to apiserver
	2024/08/15 01:26:54 Using secret token for csrf signing
	2024/08/15 01:26:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/15 01:26:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/15 01:26:54 Successful initial request to the apiserver, version: v1.20.0
	2024/08/15 01:26:54 Generating JWE encryption key
	2024/08/15 01:26:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/15 01:26:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/15 01:26:55 Initializing JWE encryption key from synchronized object
	2024/08/15 01:26:55 Creating in-cluster Sidecar client
	2024/08/15 01:26:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 01:26:55 Serving insecurely on HTTP port: 9090
	2024/08/15 01:27:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 01:27:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 01:28:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 01:28:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 01:29:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 01:29:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 01:30:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 01:30:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 01:31:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 01:31:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 01:26:54 Starting overwatch
	
	
	==> storage-provisioner [6d60100a03bf073eb7d9cd0206c2b7f7ccc1226eac801a645d2f49361b8d4414] <==
	I0815 01:26:32.645466       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0815 01:27:02.651915       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fe2ee809aa8dd95a25065b4f7bcfc4da6c0dd8d21bb0d35584900de4bbf8e1e3] <==
	I0815 01:27:15.246335       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 01:27:15.267369       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 01:27:15.267562       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 01:27:32.722309       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 01:27:32.722636       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-145466_6e433635-ed67-4409-b228-c693b9385afe!
	I0815 01:27:32.724277       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69e73094-9862-4159-a698-8b361262d33c", APIVersion:"v1", ResourceVersion:"847", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-145466_6e433635-ed67-4409-b228-c693b9385afe became leader
	I0815 01:27:32.823919       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-145466_6e433635-ed67-4409-b228-c693b9385afe!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145466 -n old-k8s-version-145466
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-145466 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-qvcw4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-145466 describe pod metrics-server-9975d5f86-qvcw4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-145466 describe pod metrics-server-9975d5f86-qvcw4: exit status 1 (202.519986ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-qvcw4" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-145466 describe pod metrics-server-9975d5f86-qvcw4: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (381.71s)


Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.28
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.31.0/json-events 7.02
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 215.4
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 15.43
34 TestAddons/parallel/Ingress 20.33
35 TestAddons/parallel/InspektorGadget 10.88
36 TestAddons/parallel/MetricsServer 5.86
39 TestAddons/parallel/CSI 64.87
40 TestAddons/parallel/Headlamp 15.71
41 TestAddons/parallel/CloudSpanner 6.59
42 TestAddons/parallel/LocalPath 8.43
43 TestAddons/parallel/NvidiaDevicePlugin 6.51
44 TestAddons/parallel/Yakd 11.89
45 TestAddons/StoppedEnableDisable 12.2
46 TestCertOptions 34.23
47 TestCertExpiration 228.63
49 TestForceSystemdFlag 42.81
50 TestForceSystemdEnv 44.33
51 TestDockerEnvContainerd 44.85
56 TestErrorSpam/setup 30.61
57 TestErrorSpam/start 0.74
58 TestErrorSpam/status 1.1
59 TestErrorSpam/pause 1.78
60 TestErrorSpam/unpause 1.94
61 TestErrorSpam/stop 12.19
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 47.84
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.15
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.31
73 TestFunctional/serial/CacheCmd/cache/add_local 1.27
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.96
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 46.38
82 TestFunctional/serial/ComponentHealth 0.09
83 TestFunctional/serial/LogsCmd 1.71
84 TestFunctional/serial/LogsFileCmd 1.71
85 TestFunctional/serial/InvalidService 5.14
87 TestFunctional/parallel/ConfigCmd 0.49
88 TestFunctional/parallel/DashboardCmd 9.56
89 TestFunctional/parallel/DryRun 0.41
90 TestFunctional/parallel/InternationalLanguage 0.22
91 TestFunctional/parallel/StatusCmd 1.23
95 TestFunctional/parallel/ServiceCmdConnect 10.68
96 TestFunctional/parallel/AddonsCmd 0.21
97 TestFunctional/parallel/PersistentVolumeClaim 25.06
99 TestFunctional/parallel/SSHCmd 0.66
100 TestFunctional/parallel/CpCmd 2.31
102 TestFunctional/parallel/FileSync 0.3
103 TestFunctional/parallel/CertSync 2.15
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
111 TestFunctional/parallel/License 0.26
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.42
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
125 TestFunctional/parallel/ServiceCmd/List 0.63
126 TestFunctional/parallel/ProfileCmd/profile_list 0.48
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.68
130 TestFunctional/parallel/MountCmd/any-port 7.6
131 TestFunctional/parallel/ServiceCmd/Format 0.37
132 TestFunctional/parallel/ServiceCmd/URL 0.47
133 TestFunctional/parallel/MountCmd/specific-port 2.37
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.25
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.27
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.93
142 TestFunctional/parallel/ImageCommands/Setup 0.77
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.46
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
153 TestFunctional/delete_echo-server_images 0.05
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 114.1
160 TestMultiControlPlane/serial/DeployApp 32.79
161 TestMultiControlPlane/serial/PingHostFromPods 1.67
162 TestMultiControlPlane/serial/AddWorkerNode 23.47
163 TestMultiControlPlane/serial/NodeLabels 0.14
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.74
165 TestMultiControlPlane/serial/CopyFile 19.05
166 TestMultiControlPlane/serial/StopSecondaryNode 12.86
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.2
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 146.23
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.38
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
173 TestMultiControlPlane/serial/StopCluster 35.95
174 TestMultiControlPlane/serial/RestartCluster 80.09
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
176 TestMultiControlPlane/serial/AddSecondaryNode 41.49
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
181 TestJSONOutput/start/Command 50.05
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.72
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.68
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.83
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 36.46
207 TestKicCustomNetwork/use_default_bridge_network 33.05
208 TestKicExistingNetwork 34.64
209 TestKicCustomSubnet 35.74
210 TestKicStaticIP 32.96
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 66.45
215 TestMountStart/serial/StartWithMountFirst 6.23
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 5.81
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.63
220 TestMountStart/serial/VerifyMountPostDelete 0.27
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 7.36
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 62.33
227 TestMultiNode/serial/DeployApp2Nodes 15.8
228 TestMultiNode/serial/PingHostFrom2Pods 1
229 TestMultiNode/serial/AddNode 17.38
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.34
232 TestMultiNode/serial/CopyFile 9.97
233 TestMultiNode/serial/StopNode 2.24
234 TestMultiNode/serial/StartAfterStop 9.46
235 TestMultiNode/serial/RestartKeepsNodes 81.79
236 TestMultiNode/serial/DeleteNode 5.22
237 TestMultiNode/serial/StopMultiNode 23.99
238 TestMultiNode/serial/RestartMultiNode 54.07
239 TestMultiNode/serial/ValidateNameConflict 35.11
244 TestPreload 113.96
246 TestScheduledStopUnix 109.44
249 TestInsufficientStorage 10.91
250 TestRunningBinaryUpgrade 94.22
252 TestKubernetesUpgrade 354.4
253 TestMissingContainerUpgrade 168.74
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
257 TestPause/serial/Start 71.7
258 TestNoKubernetes/serial/StartWithK8s 42.24
259 TestNoKubernetes/serial/StartWithStopK8s 17.87
260 TestNoKubernetes/serial/Start 5.66
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
262 TestNoKubernetes/serial/ProfileList 1.04
263 TestNoKubernetes/serial/Stop 1.23
264 TestNoKubernetes/serial/StartNoArgs 7.17
265 TestPause/serial/SecondStartNoReconfiguration 7.24
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
267 TestPause/serial/Pause 0.84
268 TestPause/serial/VerifyStatus 0.36
269 TestPause/serial/Unpause 0.86
270 TestPause/serial/PauseAgain 1.16
271 TestPause/serial/DeletePaused 4.32
272 TestPause/serial/VerifyDeletedResources 0.34
273 TestStoppedBinaryUpgrade/Setup 1.32
274 TestStoppedBinaryUpgrade/Upgrade 113.72
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.49
290 TestNetworkPlugins/group/false 4.39
295 TestStartStop/group/old-k8s-version/serial/FirstStart 154.09
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.64
298 TestStartStop/group/no-preload/serial/FirstStart 87.4
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.58
300 TestStartStop/group/old-k8s-version/serial/Stop 13.04
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
303 TestStartStop/group/no-preload/serial/DeployApp 8.38
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
305 TestStartStop/group/no-preload/serial/Stop 12.06
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
307 TestStartStop/group/no-preload/serial/SecondStart 303.21
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
311 TestStartStop/group/old-k8s-version/serial/Pause 3.08
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/embed-certs/serial/FirstStart 72.05
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
317 TestStartStop/group/no-preload/serial/Pause 3.58
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.54
320 TestStartStop/group/embed-certs/serial/DeployApp 7.43
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.5
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
323 TestStartStop/group/embed-certs/serial/Stop 12.22
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
327 TestStartStop/group/embed-certs/serial/SecondStart 273.83
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 271.98
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/embed-certs/serial/Pause 4.16
336 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
337 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.49
339 TestStartStop/group/newest-cni/serial/FirstStart 46.23
340 TestNetworkPlugins/group/auto/Start 72.06
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.03
343 TestStartStop/group/newest-cni/serial/Stop 1.39
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
345 TestStartStop/group/newest-cni/serial/SecondStart 16.67
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
349 TestStartStop/group/newest-cni/serial/Pause 3.25
350 TestNetworkPlugins/group/auto/KubeletFlags 0.37
351 TestNetworkPlugins/group/kindnet/Start 70.23
352 TestNetworkPlugins/group/auto/NetCatPod 11.43
353 TestNetworkPlugins/group/auto/DNS 0.23
354 TestNetworkPlugins/group/auto/Localhost 0.2
355 TestNetworkPlugins/group/auto/HairPin 0.24
356 TestNetworkPlugins/group/calico/Start 70.79
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.44
360 TestNetworkPlugins/group/kindnet/DNS 0.29
361 TestNetworkPlugins/group/kindnet/Localhost 0.21
362 TestNetworkPlugins/group/kindnet/HairPin 0.21
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/custom-flannel/Start 58.18
365 TestNetworkPlugins/group/calico/KubeletFlags 0.36
366 TestNetworkPlugins/group/calico/NetCatPod 11.44
367 TestNetworkPlugins/group/calico/DNS 0.23
368 TestNetworkPlugins/group/calico/Localhost 0.19
369 TestNetworkPlugins/group/calico/HairPin 0.25
370 TestNetworkPlugins/group/enable-default-cni/Start 72.05
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.34
373 TestNetworkPlugins/group/custom-flannel/DNS 0.38
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.27
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
376 TestNetworkPlugins/group/flannel/Start 50.98
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.36
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.37
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.3
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/bridge/Start 75.32
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
385 TestNetworkPlugins/group/flannel/NetCatPod 11.32
386 TestNetworkPlugins/group/flannel/DNS 0.29
387 TestNetworkPlugins/group/flannel/Localhost 0.17
388 TestNetworkPlugins/group/flannel/HairPin 0.18
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
390 TestNetworkPlugins/group/bridge/NetCatPod 9.26
391 TestNetworkPlugins/group/bridge/DNS 0.18
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (10.28s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-636458 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-636458 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.275890284s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.28s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-636458
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-636458: exit status 85 (71.85531ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-636458 | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC |          |
	|         | -p download-only-636458        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:36:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:36:04.444500  592665 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:36:04.444696  592665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:36:04.444723  592665 out.go:304] Setting ErrFile to fd 2...
	I0815 00:36:04.444742  592665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:36:04.445030  592665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	W0815 00:36:04.445205  592665 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19443-587265/.minikube/config/config.json: open /home/jenkins/minikube-integration/19443-587265/.minikube/config/config.json: no such file or directory
	I0815 00:36:04.445648  592665 out.go:298] Setting JSON to true
	I0815 00:36:04.446543  592665 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15487,"bootTime":1723666678,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 00:36:04.446645  592665 start.go:139] virtualization:  
	I0815 00:36:04.449768  592665 out.go:97] [download-only-636458] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0815 00:36:04.449905  592665 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19443-587265/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 00:36:04.449941  592665 notify.go:220] Checking for updates...
	I0815 00:36:04.451739  592665 out.go:169] MINIKUBE_LOCATION=19443
	I0815 00:36:04.453639  592665 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:36:04.455434  592665 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 00:36:04.457832  592665 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	I0815 00:36:04.459838  592665 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0815 00:36:04.464349  592665 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 00:36:04.464696  592665 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:36:04.485883  592665 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:36:04.486001  592665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:36:04.555831  592665 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 00:36:04.545011602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:36:04.556003  592665 docker.go:307] overlay module found
	I0815 00:36:04.558067  592665 out.go:97] Using the docker driver based on user configuration
	I0815 00:36:04.558106  592665 start.go:297] selected driver: docker
	I0815 00:36:04.558114  592665 start.go:901] validating driver "docker" against <nil>
	I0815 00:36:04.558234  592665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:36:04.610225  592665 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 00:36:04.600342596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:36:04.610394  592665 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:36:04.610680  592665 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0815 00:36:04.610849  592665 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 00:36:04.613527  592665 out.go:169] Using Docker driver with root privileges
	I0815 00:36:04.615439  592665 cni.go:84] Creating CNI manager for ""
	I0815 00:36:04.615460  592665 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 00:36:04.615470  592665 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:36:04.615546  592665 start.go:340] cluster config:
	{Name:download-only-636458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-636458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:36:04.617774  592665 out.go:97] Starting "download-only-636458" primary control-plane node in "download-only-636458" cluster
	I0815 00:36:04.617802  592665 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0815 00:36:04.619822  592665 out.go:97] Pulling base image v0.0.44-1723650208-19443 ...
	I0815 00:36:04.619889  592665 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0815 00:36:04.619970  592665 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 00:36:04.635497  592665 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:36:04.636367  592665 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 00:36:04.636475  592665 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:36:04.693917  592665 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0815 00:36:04.693950  592665 cache.go:56] Caching tarball of preloaded images
	I0815 00:36:04.694740  592665 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0815 00:36:04.697049  592665 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 00:36:04.697075  592665 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0815 00:36:04.789844  592665 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19443-587265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0815 00:36:09.051937  592665 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 00:36:10.936825  592665 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0815 00:36:10.936933  592665 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19443-587265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0815 00:36:12.031650  592665 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0815 00:36:12.032071  592665 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/download-only-636458/config.json ...
	I0815 00:36:12.032108  592665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/download-only-636458/config.json: {Name:mk62c4a6a2509fb6c513d9db3a3932d2738058c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:36:12.032735  592665 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0815 00:36:12.032981  592665 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19443-587265/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-636458 host does not exist
	  To start a cluster, run: "minikube start -p download-only-636458"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-636458
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.31.0/json-events (7.02s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-886391 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-886391 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.014777593s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.02s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-886391
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-886391: exit status 85 (70.741336ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-636458 | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC |                     |
	|         | -p download-only-636458        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC | 15 Aug 24 00:36 UTC |
	| delete  | -p download-only-636458        | download-only-636458 | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC | 15 Aug 24 00:36 UTC |
	| start   | -o=json --download-only        | download-only-886391 | jenkins | v1.33.1 | 15 Aug 24 00:36 UTC |                     |
	|         | -p download-only-886391        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:36:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:36:15.139354  592868 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:36:15.139475  592868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:36:15.139512  592868 out.go:304] Setting ErrFile to fd 2...
	I0815 00:36:15.139535  592868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:36:15.139829  592868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	I0815 00:36:15.140332  592868 out.go:298] Setting JSON to true
	I0815 00:36:15.141300  592868 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15498,"bootTime":1723666678,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 00:36:15.141412  592868 start.go:139] virtualization:  
	I0815 00:36:15.143834  592868 out.go:97] [download-only-886391] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 00:36:15.144090  592868 notify.go:220] Checking for updates...
	I0815 00:36:15.146119  592868 out.go:169] MINIKUBE_LOCATION=19443
	I0815 00:36:15.148802  592868 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:36:15.150741  592868 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 00:36:15.152287  592868 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	I0815 00:36:15.153902  592868 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0815 00:36:15.156814  592868 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 00:36:15.157073  592868 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:36:15.182067  592868 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:36:15.182228  592868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:36:15.248092  592868 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 00:36:15.238401368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:36:15.248206  592868 docker.go:307] overlay module found
	I0815 00:36:15.250136  592868 out.go:97] Using the docker driver based on user configuration
	I0815 00:36:15.250167  592868 start.go:297] selected driver: docker
	I0815 00:36:15.250177  592868 start.go:901] validating driver "docker" against <nil>
	I0815 00:36:15.250281  592868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:36:15.299485  592868 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 00:36:15.290721647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:36:15.299650  592868 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:36:15.299958  592868 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0815 00:36:15.300117  592868 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 00:36:15.302278  592868 out.go:169] Using Docker driver with root privileges
	I0815 00:36:15.303673  592868 cni.go:84] Creating CNI manager for ""
	I0815 00:36:15.303693  592868 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 00:36:15.303703  592868 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:36:15.303774  592868 start.go:340] cluster config:
	{Name:download-only-886391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-886391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:36:15.305439  592868 out.go:97] Starting "download-only-886391" primary control-plane node in "download-only-886391" cluster
	I0815 00:36:15.305458  592868 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0815 00:36:15.307067  592868 out.go:97] Pulling base image v0.0.44-1723650208-19443 ...
	I0815 00:36:15.307087  592868 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 00:36:15.307260  592868 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 00:36:15.321253  592868 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:36:15.321389  592868 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 00:36:15.321414  592868 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 00:36:15.321425  592868 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 00:36:15.321434  592868 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 00:36:15.361486  592868 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0815 00:36:15.361514  592868 cache.go:56] Caching tarball of preloaded images
	I0815 00:36:15.362060  592868 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 00:36:15.364173  592868 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0815 00:36:15.364213  592868 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0815 00:36:15.553753  592868 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19443-587265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-886391 host does not exist
	  To start a cluster, run: "minikube start -p download-only-886391"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-886391
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-730083 --alsologtostderr --binary-mirror http://127.0.0.1:35599 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-730083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-730083
--- PASS: TestBinaryMirror (0.55s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-428464
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-428464: exit status 85 (67.9799ms)

-- stdout --
	* Profile "addons-428464" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-428464"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-428464
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-428464: exit status 85 (64.032441ms)

-- stdout --
	* Profile "addons-428464" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-428464"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (215.4s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-428464 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-428464 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m35.399335165s)
--- PASS: TestAddons/Setup (215.40s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-428464 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-428464 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Registry (15.43s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.403883ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-lmclw" [46f21c4d-b129-4b47-92a6-2655cf7b7dcb] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003974218s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sc6vh" [bf1f2246-5a9a-49e0-91af-08927e629891] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0038432s
addons_test.go:342: (dbg) Run:  kubectl --context addons-428464 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-428464 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-428464 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.415639546s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 ip
2024/08/15 00:43:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.43s)
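Note: the registry check above can be reproduced by hand against a cluster with the registry addon enabled. A minimal sketch using the same commands the test runs (the profile name is the one from this run and is otherwise illustrative):

    # probe the in-cluster registry Service from a throwaway busybox pod
    kubectl --context addons-428464 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # then hit the registry proxy exposed on the node IP (port 5000)
    curl -s "http://$(out/minikube-linux-arm64 -p addons-428464 ip):5000"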

TestAddons/parallel/Ingress (20.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-428464 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-428464 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-428464 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [08de11d2-e04e-4b5d-941f-f915a51c6ecf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [08de11d2-e04e-4b5d-941f-f915a51c6ecf] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004055255s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-428464 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-428464 addons disable ingress-dns --alsologtostderr -v=1: (1.566381635s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-428464 addons disable ingress --alsologtostderr -v=1: (7.875655572s)
--- PASS: TestAddons/parallel/Ingress (20.33s)
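Note: once the nginx pod, Service and Ingress from testdata/ are applied, the ingress verification above reduces to two manual checks: curl through the controller with an explicit Host header, and resolve an ingress-dns name against the node IP. A rough sketch, reusing the commands from this run:

    # curl the ingress controller from inside the node, spoofing the Ingress host
    out/minikube-linux-arm64 -p addons-428464 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # resolve a name managed by the ingress-dns addon against the node IP
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-428464 ip)"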

TestAddons/parallel/InspektorGadget (10.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qnvvp" [369506d7-046c-468c-be58-27dc36f7ae0f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004682564s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-428464
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-428464: (5.869910383s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

TestAddons/parallel/MetricsServer (5.86s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.943892ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-xtzcw" [916fbe08-9127-45ed-b5b6-a7ff268a239b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004455511s
addons_test.go:417: (dbg) Run:  kubectl --context addons-428464 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.86s)

TestAddons/parallel/CSI (64.87s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 10.706961ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-428464 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-428464 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4cc245ec-9b30-48b4-a274-f59a69bddc7f] Pending
helpers_test.go:344: "task-pv-pod" [4cc245ec-9b30-48b4-a274-f59a69bddc7f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4cc245ec-9b30-48b4-a274-f59a69bddc7f] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004679994s
addons_test.go:590: (dbg) Run:  kubectl --context addons-428464 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-428464 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-428464 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-428464 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-428464 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-428464 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-428464 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [10f32190-bacd-4182-bc80-0d4ad14012fd] Pending
helpers_test.go:344: "task-pv-pod-restore" [10f32190-bacd-4182-bc80-0d4ad14012fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [10f32190-bacd-4182-bc80-0d4ad14012fd] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004182932s
addons_test.go:632: (dbg) Run:  kubectl --context addons-428464 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-428464 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-428464 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-428464 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.835865009s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (64.87s)

TestAddons/parallel/Headlamp (15.71s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-428464 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-kchml" [8275c599-f6a7-4fab-b85a-2c8227bec3f2] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-kchml" [8275c599-f6a7-4fab-b85a-2c8227bec3f2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-kchml" [8275c599-f6a7-4fab-b85a-2c8227bec3f2] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004117012s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-428464 addons disable headlamp --alsologtostderr -v=1: (5.792161777s)
--- PASS: TestAddons/parallel/Headlamp (15.71s)

TestAddons/parallel/CloudSpanner (6.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-gcfgv" [81a9777f-c20c-4c5d-b091-069cc964cca3] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003269924s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-428464
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

TestAddons/parallel/LocalPath (8.43s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-428464 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-428464 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-428464 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6a285739-916b-470e-8bef-143e07547603] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6a285739-916b-470e-8bef-143e07547603] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6a285739-916b-470e-8bef-143e07547603] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005263075s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-428464 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 ssh "cat /opt/local-path-provisioner/pvc-a4ae316f-eaf4-4f12-8123-93bd62794b9f_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-428464 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-428464 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.43s)

TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jbhnv" [415a8833-749a-4d27-91fa-ddb46d8b9062] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003896046s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-428464
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

TestAddons/parallel/Yakd (11.89s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4fs9w" [ee4b5e11-87f1-4179-a23b-35ff62ca12f4] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.009765448s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-428464 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-428464 addons disable yakd --alsologtostderr -v=1: (5.878852721s)
--- PASS: TestAddons/parallel/Yakd (11.89s)

TestAddons/StoppedEnableDisable (12.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-428464
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-428464: (11.940376727s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-428464
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-428464
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-428464
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

TestCertOptions (34.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-476187 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-476187 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (31.546812014s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-476187 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-476187 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-476187 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-476187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-476187
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-476187: (1.981420923s)
--- PASS: TestCertOptions (34.23s)
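Note: the flags exercised here are the standard way to add extra SANs and a non-default port to the apiserver certificate, and the result can be checked with openssl on the node. A minimal sketch of the same flow (profile name taken from this run, otherwise illustrative):

    out/minikube-linux-arm64 start -p cert-options-476187 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=containerd
    # the extra IPs and names should appear as SANs in the apiserver certificate
    out/minikube-linux-arm64 -p cert-options-476187 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"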

TestCertExpiration (228.63s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-480110 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-480110 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.969702721s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-480110 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-480110 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.349024611s)
helpers_test.go:175: Cleaning up "cert-expiration-480110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-480110
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-480110: (2.314549689s)
--- PASS: TestCertExpiration (228.63s)

TestForceSystemdFlag (42.81s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-246341 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-246341 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.72600099s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-246341 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-246341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-246341
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-246341: (2.554229348s)
--- PASS: TestForceSystemdFlag (42.81s)
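Note: --force-systemd switches the node runtime to the systemd cgroup driver, and the test verifies it by reading the generated containerd config over ssh. A rough sketch, assuming the usual containerd layout in which the systemd driver shows up as SystemdCgroup = true in the runc options:

    out/minikube-linux-arm64 start -p force-systemd-flag-246341 --memory=2048 \
      --force-systemd --driver=docker --container-runtime=containerd
    # inspect the generated config; SystemdCgroup = true is the expected setting
    out/minikube-linux-arm64 -p force-systemd-flag-246341 ssh "cat /etc/containerd/config.toml"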

TestForceSystemdEnv (44.33s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-673385 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-673385 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.545627024s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-673385 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-673385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-673385
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-673385: (2.357392493s)
--- PASS: TestForceSystemdEnv (44.33s)

TestDockerEnvContainerd (44.85s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-358796 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-358796 --driver=docker  --container-runtime=containerd: (29.112904033s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-358796"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-358796": (1.003689438s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sfVc0bML2ueg/agent.611196" SSH_AGENT_PID="611197" DOCKER_HOST=ssh://docker@127.0.0.1:33515 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sfVc0bML2ueg/agent.611196" SSH_AGENT_PID="611197" DOCKER_HOST=ssh://docker@127.0.0.1:33515 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sfVc0bML2ueg/agent.611196" SSH_AGENT_PID="611197" DOCKER_HOST=ssh://docker@127.0.0.1:33515 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.265949743s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sfVc0bML2ueg/agent.611196" SSH_AGENT_PID="611197" DOCKER_HOST=ssh://docker@127.0.0.1:33515 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-358796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-358796
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-358796: (1.984105636s)
--- PASS: TestDockerEnvContainerd (44.85s)
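Note: the docker-env flow in this test points a host docker CLI at the node over SSH. "minikube docker-env --ssh-host --ssh-add" emits SSH_AUTH_SOCK, SSH_AGENT_PID and a DOCKER_HOST=ssh://... export, which is normally consumed with eval. A sketch of the same sequence (profile name from this run; the build context is whatever directory you point it at):

    out/minikube-linux-arm64 start -p dockerenv-358796 --driver=docker --container-runtime=containerd
    # export the SSH agent variables and DOCKER_HOST=ssh://docker@127.0.0.1:<port>
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-358796)"
    # the test builds with the classic builder (BuildKit disabled) over the ssh transport
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls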

TestErrorSpam/setup (30.61s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-769922 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-769922 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-769922 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-769922 --driver=docker  --container-runtime=containerd: (30.60876322s)
--- PASS: TestErrorSpam/setup (30.61s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.78s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 pause
--- PASS: TestErrorSpam/pause (1.78s)

TestErrorSpam/unpause (1.94s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 unpause
--- PASS: TestErrorSpam/unpause (1.94s)

TestErrorSpam/stop (12.19s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 stop: (12.001452222s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-769922 --log_dir /tmp/nospam-769922 stop
--- PASS: TestErrorSpam/stop (12.19s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19443-587265/.minikube/files/etc/test/nested/copy/592660/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.84s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-369279 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-369279 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (47.836960144s)
--- PASS: TestFunctional/serial/StartWithProxy (47.84s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.15s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-369279 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-369279 --alsologtostderr -v=8: (6.147905226s)
functional_test.go:663: soft start took 6.151305207s for "functional-369279" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.15s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-369279 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-369279 cache add registry.k8s.io/pause:3.1: (1.528058285s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-369279 cache add registry.k8s.io/pause:3.3: (1.570202651s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-369279 cache add registry.k8s.io/pause:latest: (1.207973536s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.31s)

TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-369279 /tmp/TestFunctionalserialCacheCmdcacheadd_local3006144098/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 cache add minikube-local-cache-test:functional-369279
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 cache delete minikube-local-cache-test:functional-369279
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-369279
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-369279 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (275.940002ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-369279 cache reload: (1.05664679s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)
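
In plain commands, the reload round-trip above is roughly (a sketch using the pause image, as the test does; profile name from this run):

    # Remove the image inside the node, confirm it is gone, then restore it from minikube's cache
    minikube -p functional-369279 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
    minikube -p functional-369279 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # fails: image absent
    minikube -p functional-369279 cache reload
    minikube -p functional-369279 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # succeeds again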

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 kubectl -- --context functional-369279 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-369279 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46.38s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-369279 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-369279 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.378773528s)
functional_test.go:761: restart took 46.378876182s for "functional-369279" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.38s)
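
The restart above forwards an extra flag to one control-plane component; a minimal equivalent invocation (profile name from this run):

    # --extra-config=<component>.<flag>=<value> passes the flag through to the named component
    minikube start -p functional-369279 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all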

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-369279 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
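
The same health check can be run by hand; a rough equivalent of what the test asserts (the jsonpath expression is illustrative, not the test's own code):

    # Every control-plane pod should report phase Running and a Ready condition
    kubectl --context functional-369279 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'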

                                                
                                    
TestFunctional/serial/LogsCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-369279 logs: (1.704997557s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 logs --file /tmp/TestFunctionalserialLogsFileCmd321710071/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-369279 logs --file /tmp/TestFunctionalserialLogsFileCmd321710071/001/logs.txt: (1.712175232s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

                                                
                                    
TestFunctional/serial/InvalidService (5.14s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-369279 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-369279
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-369279: exit status 115 (642.574874ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30415 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-369279 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-369279 delete -f testdata/invalidsvc.yaml: (1.247701458s)
--- PASS: TestFunctional/serial/InvalidService (5.14s)
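
In outline, the failure path being exercised is (exit status 115 is the SVC_UNREACHABLE code shown above; the manifest is the repo's testdata file):

    # A Service whose selector matches no running pod makes `minikube service` exit with an error
    kubectl --context functional-369279 apply -f testdata/invalidsvc.yaml
    minikube -p functional-369279 service invalid-svc    # exit status 115 (SVC_UNREACHABLE)
    kubectl --context functional-369279 delete -f testdata/invalidsvc.yaml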

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-369279 config get cpus: exit status 14 (113.054285ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-369279 config get cpus: exit status 14 (69.777624ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
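
Spelled out, the config round-trip above is (exit status 14 is what `config get` returns for an unset key, as the stderr blocks show):

    minikube -p functional-369279 config unset cpus
    minikube -p functional-369279 config get cpus    # exit 14: key not set
    minikube -p functional-369279 config set cpus 2
    minikube -p functional-369279 config get cpus    # prints 2
    minikube -p functional-369279 config unset cpus
    minikube -p functional-369279 config get cpus    # exit 14 again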

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-369279 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-369279 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 626309: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.56s)

                                                
                                    
TestFunctional/parallel/DryRun (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-369279 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-369279 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (176.977461ms)

                                                
                                                
-- stdout --
	* [functional-369279] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:49:32.868850  626007 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:49:32.869041  626007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:49:32.869069  626007 out.go:304] Setting ErrFile to fd 2...
	I0815 00:49:32.869089  626007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:49:32.869383  626007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	I0815 00:49:32.869785  626007 out.go:298] Setting JSON to false
	I0815 00:49:32.870790  626007 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16295,"bootTime":1723666678,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 00:49:32.870886  626007 start.go:139] virtualization:  
	I0815 00:49:32.872977  626007 out.go:177] * [functional-369279] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 00:49:32.875213  626007 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:49:32.875276  626007 notify.go:220] Checking for updates...
	I0815 00:49:32.879335  626007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:49:32.881396  626007 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 00:49:32.882900  626007 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	I0815 00:49:32.884513  626007 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 00:49:32.886628  626007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:49:32.889466  626007 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 00:49:32.890025  626007 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:49:32.920352  626007 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:49:32.920474  626007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:49:32.980312  626007 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 00:49:32.970788739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:49:32.980432  626007 docker.go:307] overlay module found
	I0815 00:49:32.982905  626007 out.go:177] * Using the docker driver based on existing profile
	I0815 00:49:32.985200  626007 start.go:297] selected driver: docker
	I0815 00:49:32.985221  626007 start.go:901] validating driver "docker" against &{Name:functional-369279 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-369279 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:49:32.985342  626007 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:49:32.987991  626007 out.go:177] 
	W0815 00:49:32.989883  626007 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 00:49:32.991857  626007 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-369279 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.41s)
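
A bare-bones repro of the failing dry run (250MB is deliberately below minikube's 1800MB minimum, which produces the RSRC_INSUFFICIENT_REQ_MEMORY exit seen above):

    # Validate the requested configuration against the existing profile without creating anything
    minikube start -p functional-369279 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd    # exit status 23
    # With the memory request omitted, the same dry run validates cleanly
    minikube start -p functional-369279 --dry-run --driver=docker --container-runtime=containerd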

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-369279 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-369279 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (219.136332ms)

                                                
                                                
-- stdout --
	* [functional-369279] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:49:32.676806  625928 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:49:32.677036  625928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:49:32.677066  625928 out.go:304] Setting ErrFile to fd 2...
	I0815 00:49:32.677086  625928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:49:32.678473  625928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	I0815 00:49:32.678961  625928 out.go:298] Setting JSON to false
	I0815 00:49:32.680202  625928 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16295,"bootTime":1723666678,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 00:49:32.680310  625928 start.go:139] virtualization:  
	I0815 00:49:32.684071  625928 out.go:177] * [functional-369279] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0815 00:49:32.686570  625928 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:49:32.686685  625928 notify.go:220] Checking for updates...
	I0815 00:49:32.691814  625928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:49:32.693539  625928 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 00:49:32.695306  625928 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	I0815 00:49:32.697448  625928 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 00:49:32.699960  625928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:49:32.702410  625928 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 00:49:32.702946  625928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:49:32.730681  625928 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:49:32.730785  625928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:49:32.803821  625928 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 00:49:32.793884542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:49:32.803984  625928 docker.go:307] overlay module found
	I0815 00:49:32.806404  625928 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0815 00:49:32.808237  625928 start.go:297] selected driver: docker
	I0815 00:49:32.808255  625928 start.go:901] validating driver "docker" against &{Name:functional-369279 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-369279 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:49:32.808382  625928 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:49:32.811105  625928 out.go:177] 
	W0815 00:49:32.812739  625928 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0815 00:49:32.814451  625928 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-369279 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-369279 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-7znrk" [0ec0e597-1f1b-41cf-8c09-120c1ad0cb82] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-7znrk" [0ec0e597-1f1b-41cf-8c09-120c1ad0cb82] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.00480144s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30198
functional_test.go:1675: http://192.168.49.2:30198: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-7znrk

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30198
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.68s)
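
Condensed, the connectivity check is (image and resource names are the ones used above; the URL is whatever `minikube service --url` prints for the NodePort):

    kubectl --context functional-369279 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-369279 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-369279 service hello-node-connect --url)
    curl -s "$URL"    # echoserver answers with the request details logged above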

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [62fc1fcc-8fd5-46a8-bc9b-4086bffd94e8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007548837s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-369279 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-369279 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-369279 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-369279 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [baa64b61-7660-4885-956d-d496a7c6b9af] Pending
helpers_test.go:344: "sp-pod" [baa64b61-7660-4885-956d-d496a7c6b9af] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [baa64b61-7660-4885-956d-d496a7c6b9af] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003339765s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-369279 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-369279 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-369279 delete -f testdata/storage-provisioner/pod.yaml: (1.056797162s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-369279 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fed31442-86a2-4a5b-81a5-d7aab2177aef] Pending
helpers_test.go:344: "sp-pod" [fed31442-86a2-4a5b-81a5-d7aab2177aef] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003129381s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-369279 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.06s)
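
The persistence check boils down to writing a file through one pod, deleting that pod, and reading the file back from a fresh pod bound to the same PVC (manifests are the repo's testdata files):

    kubectl --context functional-369279 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-369279 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-369279 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-369279 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-369279 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-369279 exec sp-pod -- ls /tmp/mount    # foo survives the pod restart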

                                                
                                    
TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh -n functional-369279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 cp functional-369279:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3525316142/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh -n functional-369279 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh -n functional-369279 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.31s)
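
Equivalent commands for the copy round-trip (the host-side destination path here is shortened for readability):

    # Host -> node, node -> host, and host -> an arbitrary node path, each verified over ssh
    minikube -p functional-369279 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-369279 ssh -n functional-369279 "sudo cat /home/docker/cp-test.txt"
    minikube -p functional-369279 cp functional-369279:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p functional-369279 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt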

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/592660/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "sudo cat /etc/test/nested/copy/592660/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
TestFunctional/parallel/CertSync (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/592660.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "sudo cat /etc/ssl/certs/592660.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/592660.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "sudo cat /usr/share/ca-certificates/592660.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5926602.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "sudo cat /etc/ssl/certs/5926602.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5926602.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "sudo cat /usr/share/ca-certificates/5926602.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.15s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-369279 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "sudo systemctl is-active docker"
2024/08/15 00:49:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-369279 ssh "sudo systemctl is-active docker": exit status 1 (268.176577ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-369279 ssh "sudo systemctl is-active crio": exit status 1 (277.919357ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
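
The assertion is simply that the two runtimes not selected for this cluster are inactive; `systemctl is-active` exits non-zero for an inactive unit, which surfaces as the ssh exit status 3 shown above:

    # containerd is the configured runtime, so docker and crio should both report inactive
    minikube -p functional-369279 ssh "sudo systemctl is-active docker"    # prints "inactive", non-zero exit
    minikube -p functional-369279 ssh "sudo systemctl is-active crio"      # prints "inactive", non-zero exit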

                                                
                                    
TestFunctional/parallel/License (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-369279 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-369279 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-369279 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-369279 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 623633: os: process already finished
helpers_test.go:502: unable to terminate pid 623432: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-369279 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-369279 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [607e5e46-fd34-465d-ac62-a6a53c1b15ac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [607e5e46-fd34-465d-ac62-a6a53c1b15ac] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.00426506s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-369279 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.119.194 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-369279 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
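
Taken together, the tunnel subtests above amount to this flow (nginx-svc comes from the test's testdata manifest; 10.103.119.194 is simply the address the tunnel exposed in this run):

    # Keep the tunnel running in a separate terminal so LoadBalancer services get a reachable ingress IP
    minikube -p functional-369279 tunnel
    # In another shell: create the LoadBalancer service and read its ingress IP once assigned
    kubectl --context functional-369279 apply -f testdata/testsvc.yaml
    IP=$(kubectl --context functional-369279 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP"    # nginx answers while the tunnel is up; stopping it removes the route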

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-369279 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-369279 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-2nskd" [df345ea1-3404-441a-808f-751a8bee76b4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-2nskd" [df345ea1-3404-441a-808f-751a8bee76b4] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004554105s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "403.640678ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "73.749589ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 service list -o json
functional_test.go:1494: Took "575.171789ms" to run "out/minikube-linux-arm64 -p functional-369279 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "408.318284ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "64.153107ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31994
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.68s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-369279 /tmp/TestFunctionalparallelMountCmdany-port1854435083/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723682969885570440" to /tmp/TestFunctionalparallelMountCmdany-port1854435083/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723682969885570440" to /tmp/TestFunctionalparallelMountCmdany-port1854435083/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723682969885570440" to /tmp/TestFunctionalparallelMountCmdany-port1854435083/001/test-1723682969885570440
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (601.579048ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 15 00:49 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 15 00:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 15 00:49 test-1723682969885570440
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh cat /mount-9p/test-1723682969885570440
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-369279 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [157b89d2-76b2-4879-920c-73536a962acc] Pending
helpers_test.go:344: "busybox-mount" [157b89d2-76b2-4879-920c-73536a962acc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [157b89d2-76b2-4879-920c-73536a962acc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [157b89d2-76b2-4879-920c-73536a962acc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003552508s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-369279 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-369279 /tmp/TestFunctionalparallelMountCmdany-port1854435083/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.60s)
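Note that the first findmnt probe exiting with status 1 is expected: the 9p mount is still being established when the test first polls, and the immediate retry succeeds. A sketch of the same check done by hand with the commands from this run (the /tmp/... path is just the test's temporary directory; any host directory works):
  # terminal 1: start the 9p mount (blocks while mounted)
  out/minikube-linux-arm64 mount -p functional-369279 /tmp/somedir:/mount-9p --alsologtostderr -v=1
  # terminal 2: confirm the mount and inspect its contents from inside the node
  out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-369279 ssh -- ls -la /mount-9p
  # force an unmount from the guest side when finished
  out/minikube-linux-arm64 -p functional-369279 ssh "sudo umount -f /mount-9p"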

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31994
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)
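The HTTPS, Format, and URL subtests above all resolve the same hello-node NodePort (31994 in this run). The three lookups, as invoked here:
  out/minikube-linux-arm64 -p functional-369279 service hello-node --url                              # plain URL
  out/minikube-linux-arm64 -p functional-369279 service --namespace=default --https --url hello-node  # https URL
  out/minikube-linux-arm64 -p functional-369279 service hello-node --url --format={{.IP}}             # node IP only, via Go template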

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-369279 /tmp/TestFunctionalparallelMountCmdspecific-port4190424597/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (501.711461ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-369279 /tmp/TestFunctionalparallelMountCmdspecific-port4190424597/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-369279 ssh "sudo umount -f /mount-9p": exit status 1 (381.981038ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-369279 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-369279 /tmp/TestFunctionalparallelMountCmdspecific-port4190424597/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-369279 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1898158228/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-369279 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1898158228/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-369279 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1898158228/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T" /mount1: exit status 1 (832.220664ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-369279 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-369279 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1898158228/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-369279 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1898158228/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-369279 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1898158228/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)
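The "unable to find parent, assuming dead" messages are the teardown helper confirming that the three background mount daemons were already gone, because the test first ran the kill subcommand. The cleanup path exercised here, by hand:
  # stop every background mount process started for this profile
  out/minikube-linux-arm64 mount -p functional-369279 --kill=true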

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-369279 version -o=json --components: (1.267361852s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-369279 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-369279
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-369279
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-369279 image ls --format short --alsologtostderr:
I0815 00:49:49.642674  628915 out.go:291] Setting OutFile to fd 1 ...
I0815 00:49:49.642848  628915 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:49:49.642854  628915 out.go:304] Setting ErrFile to fd 2...
I0815 00:49:49.642859  628915 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:49:49.643106  628915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
I0815 00:49:49.643698  628915 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 00:49:49.643802  628915 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 00:49:49.644340  628915 cli_runner.go:164] Run: docker container inspect functional-369279 --format={{.State.Status}}
I0815 00:49:49.671422  628915 ssh_runner.go:195] Run: systemctl --version
I0815 00:49:49.671471  628915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-369279
I0815 00:49:49.698384  628915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/functional-369279/id_rsa Username:docker}
I0815 00:49:49.792591  628915 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
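The stderr trace shows how image ls works on the containerd runtime: minikube opens an SSH session to the node and parses the runtime's image list from crictl. The underlying query can also be run directly:
  # raw image list as seen by the container runtime (what image ls parses)
  out/minikube-linux-arm64 -p functional-369279 ssh "sudo crictl images --output json"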

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-369279 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | latest             | sha256:235ff2 | 67.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kicbase/echo-server               | functional-369279  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| docker.io/library/minikube-local-cache-test | functional-369279  | sha256:e1434d | 988B   |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | alpine             | sha256:d7cd33 | 18.3MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-369279 image ls --format table --alsologtostderr:
I0815 00:49:49.931192  628980 out.go:291] Setting OutFile to fd 1 ...
I0815 00:49:49.931371  628980 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:49:49.931391  628980 out.go:304] Setting ErrFile to fd 2...
I0815 00:49:49.931412  628980 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:49:49.931675  628980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
I0815 00:49:49.932370  628980 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 00:49:49.932526  628980 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 00:49:49.933035  628980 cli_runner.go:164] Run: docker container inspect functional-369279 --format={{.State.Status}}
I0815 00:49:49.954073  628980 ssh_runner.go:195] Run: systemctl --version
I0815 00:49:49.954129  628980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-369279
I0815 00:49:49.972286  628980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/functional-369279/id_rsa Username:docker}
I0815 00:49:50.064234  628980 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-369279 image ls --format json --alsologtostderr:
[{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["regist
ry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51","repoDigests":["docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40"],"repoTags":["docker.io/library/nginx:latest"],"size":"67647657"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"r
epoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha
256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha25
6:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-369279"],"size":"2173567"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:e1434d03815954c21bc0ef6de6a7297e6f75b789b6cdeb3eb5a69560426b64c7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-369279"],"size":"988"},{"id":"sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"18253575"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-369279 image ls --format json --alsologtostderr:
I0815 00:49:49.922975  628979 out.go:291] Setting OutFile to fd 1 ...
I0815 00:49:49.923153  628979 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:49:49.923165  628979 out.go:304] Setting ErrFile to fd 2...
I0815 00:49:49.923171  628979 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:49:49.923454  628979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
I0815 00:49:49.924189  628979 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 00:49:49.924378  628979 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 00:49:49.924927  628979 cli_runner.go:164] Run: docker container inspect functional-369279 --format={{.State.Status}}
I0815 00:49:49.954603  628979 ssh_runner.go:195] Run: systemctl --version
I0815 00:49:49.954691  628979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-369279
I0815 00:49:49.975976  628979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/functional-369279/id_rsa Username:docker}
I0815 00:49:50.075118  628979 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
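Of the four list formats, JSON is the easiest to post-process. Given the array-of-objects shape shown above (id, repoDigests, repoTags, size), a small pipeline can print just the tagged names; jq is assumed to be available on the host and is not part of the test itself:
  # one repo:tag per line; images with an empty repoTags list are simply skipped
  out/minikube-linux-arm64 -p functional-369279 image ls --format json | jq -r '.[].repoTags[]'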

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-369279 image ls --format yaml --alsologtostderr:
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-369279
size: "2173567"
- id: sha256:e1434d03815954c21bc0ef6de6a7297e6f75b789b6cdeb3eb5a69560426b64c7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-369279
size: "988"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
repoTags:
- docker.io/library/nginx:alpine
size: "18253575"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51
repoDigests:
- docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40
repoTags:
- docker.io/library/nginx:latest
size: "67647657"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-369279 image ls --format yaml --alsologtostderr:
I0815 00:49:49.632713  628916 out.go:291] Setting OutFile to fd 1 ...
I0815 00:49:49.632949  628916 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:49:49.632975  628916 out.go:304] Setting ErrFile to fd 2...
I0815 00:49:49.632995  628916 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:49:49.633280  628916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
I0815 00:49:49.633920  628916 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 00:49:49.634089  628916 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 00:49:49.634594  628916 cli_runner.go:164] Run: docker container inspect functional-369279 --format={{.State.Status}}
I0815 00:49:49.655597  628916 ssh_runner.go:195] Run: systemctl --version
I0815 00:49:49.655651  628916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-369279
I0815 00:49:49.686432  628916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/functional-369279/id_rsa Username:docker}
I0815 00:49:49.784268  628916 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-369279 ssh pgrep buildkitd: exit status 1 (287.20987ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image build -t localhost/my-image:functional-369279 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-369279 image build -t localhost/my-image:functional-369279 testdata/build --alsologtostderr: (2.401716398s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-369279 image build -t localhost/my-image:functional-369279 testdata/build --alsologtostderr:
I0815 00:49:50.478435  629104 out.go:291] Setting OutFile to fd 1 ...
I0815 00:49:50.479092  629104 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:49:50.479105  629104 out.go:304] Setting ErrFile to fd 2...
I0815 00:49:50.479111  629104 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:49:50.479361  629104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
I0815 00:49:50.480055  629104 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 00:49:50.480660  629104 config.go:182] Loaded profile config "functional-369279": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 00:49:50.481207  629104 cli_runner.go:164] Run: docker container inspect functional-369279 --format={{.State.Status}}
I0815 00:49:50.503517  629104 ssh_runner.go:195] Run: systemctl --version
I0815 00:49:50.503594  629104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-369279
I0815 00:49:50.529702  629104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/functional-369279/id_rsa Username:docker}
I0815 00:49:50.620652  629104 build_images.go:161] Building image from path: /tmp/build.1035819598.tar
I0815 00:49:50.620724  629104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0815 00:49:50.631006  629104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1035819598.tar
I0815 00:49:50.634912  629104 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1035819598.tar: stat -c "%s %y" /var/lib/minikube/build/build.1035819598.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1035819598.tar': No such file or directory
I0815 00:49:50.634946  629104 ssh_runner.go:362] scp /tmp/build.1035819598.tar --> /var/lib/minikube/build/build.1035819598.tar (3072 bytes)
I0815 00:49:50.660588  629104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1035819598
I0815 00:49:50.669680  629104 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1035819598 -xf /var/lib/minikube/build/build.1035819598.tar
I0815 00:49:50.679015  629104 containerd.go:394] Building image: /var/lib/minikube/build/build.1035819598
I0815 00:49:50.679095  629104 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1035819598 --local dockerfile=/var/lib/minikube/build/build.1035819598 --output type=image,name=localhost/my-image:functional-369279
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.7s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:acefea40de791650a8c8c11d3d707dfefa07af8c39ec26a6e9a33c5d38cae6e2
#8 exporting manifest sha256:acefea40de791650a8c8c11d3d707dfefa07af8c39ec26a6e9a33c5d38cae6e2 0.0s done
#8 exporting config sha256:67faad950cc47236a2f3cdb11dab6e29a27f74debdb96af5f49769fd6b03a1f6 0.0s done
#8 naming to localhost/my-image:functional-369279 done
#8 DONE 0.1s
I0815 00:49:52.799809  629104 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1035819598 --local dockerfile=/var/lib/minikube/build/build.1035819598 --output type=image,name=localhost/my-image:functional-369279: (2.120683829s)
I0815 00:49:52.800017  629104 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1035819598
I0815 00:49:52.811175  629104 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1035819598.tar
I0815 00:49:52.820833  629104 build_images.go:217] Built localhost/my-image:functional-369279 from /tmp/build.1035819598.tar
I0815 00:49:52.820909  629104 build_images.go:133] succeeded building to: functional-369279
I0815 00:49:52.820930  629104 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)
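The initial pgrep buildkitd probe exiting with status 1 only records that no buildkitd process matched on the node before the build; the build context is then copied to the node as a tarball and built with buildctl, as the trace shows. Reproducing the build by hand:
  out/minikube-linux-arm64 -p functional-369279 ssh pgrep buildkitd   # probe; exit 1 just means no match
  out/minikube-linux-arm64 -p functional-369279 image build -t localhost/my-image:functional-369279 testdata/build --alsologtostderr
  out/minikube-linux-arm64 -p functional-369279 image ls              # the new tag should be listed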

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-369279
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image load --daemon kicbase/echo-server:functional-369279 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-369279 image load --daemon kicbase/echo-server:functional-369279 --alsologtostderr: (1.122663459s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image load --daemon kicbase/echo-server:functional-369279 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-369279 image load --daemon kicbase/echo-server:functional-369279 --alsologtostderr: (1.084215271s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-369279
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image load --daemon kicbase/echo-server:functional-369279 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image save kicbase/echo-server:functional-369279 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image rm kicbase/echo-server:functional-369279 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-369279
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-369279 image save --daemon kicbase/echo-server:functional-369279 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-369279
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
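Taken together, the SaveToFile, Remove, LoadFromFile, and SaveDaemon subtests exercise a full round trip of the echo-server image between the node and the host. A condensed sketch using the exact commands from this run (the tarball path is the CI workspace path; any writable location works):
  # export the image from the node to a tarball on the host
  out/minikube-linux-arm64 -p functional-369279 image save kicbase/echo-server:functional-369279 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
  # remove it from the node, then load it back from the tarball
  out/minikube-linux-arm64 -p functional-369279 image rm kicbase/echo-server:functional-369279 --alsologtostderr
  out/minikube-linux-arm64 -p functional-369279 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
  # push a copy into the host Docker daemon and confirm it is visible there
  out/minikube-linux-arm64 -p functional-369279 image save --daemon kicbase/echo-server:functional-369279 --alsologtostderr
  docker image inspect kicbase/echo-server:functional-369279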

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-369279
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-369279
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-369279
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (114.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-107675 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0815 00:49:59.350664  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:49:59.357332  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:49:59.368688  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:49:59.390114  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:49:59.431528  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:49:59.513081  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:49:59.674623  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:49:59.996306  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:00.638456  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:01.919785  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:04.481386  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:09.603141  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:19.844705  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:50:40.326119  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:51:21.288666  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-107675 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m53.247063194s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (114.10s)
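The cert_rotation errors above appear to come from a stale reference to the earlier addons-428464 profile, whose client.crt no longer exists; they did not affect this test, and the HA start itself completed in about 1m53s. The two commands as run here:
  # start a multi-control-plane (HA) cluster on the docker driver with containerd
  out/minikube-linux-arm64 start -p ha-107675 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
  # check node and component status afterwards
  out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr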

                                                
                                    
TestMultiControlPlane/serial/DeployApp (32.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-107675 -- rollout status deployment/busybox: (29.561089284s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-79gqv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-dblbv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-dfw9z -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-79gqv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-dblbv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-dfw9z -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-79gqv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-dblbv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-dfw9z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.79s)
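
The three nslookup probes above are the pass/fail signal for DeployApp: every busybox replica must resolve an external name, the in-cluster service name, and its fully qualified form. A minimal Go sketch of the same probe loop, assuming kubectl is already pointed at the ha-107675 context (the test itself goes through out/minikube-linux-arm64 kubectl -p ha-107675 --) and reusing the pod names from this run:

    // dns_probe.go - illustrative sketch only, not part of ha_test.go.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        pods := []string{"busybox-7dff88458-79gqv", "busybox-7dff88458-dblbv", "busybox-7dff88458-dfw9z"}
        names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
        for _, pod := range pods {
            for _, name := range names {
                // Same command shape as the test: kubectl exec <pod> -- nslookup <name>
                out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", name).CombinedOutput()
                if err != nil {
                    fmt.Printf("FAIL %s -> %s: %v\n%s", pod, name, err, out)
                    continue
                }
                fmt.Printf("ok   %s -> %s\n", pod, name)
            }
        }
    }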

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-79gqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-79gqv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-dblbv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-dblbv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-dfw9z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-107675 -- exec busybox-7dff88458-dfw9z -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)
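
The shell pipeline above takes the fifth line of nslookup output for host.minikube.internal, cuts out the third field (the host-side gateway address, 192.168.49.1 in this run), and pings it once from inside the pod. A rough Go equivalent, with the kubectl context and pod name assumed from this log:

    // host_ping.go - illustrative sketch; the NR==5 / field-3 offsets are copied from the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        pod := "busybox-7dff88458-79gqv"
        resolve := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
        raw, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", resolve).Output()
        if err != nil {
            panic(err)
        }
        ip := strings.TrimSpace(string(raw))
        // Single ping from inside the pod, exactly as the test does.
        out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+ip).CombinedOutput()
        fmt.Printf("ping %s from %s: err=%v\n%s", ip, pod, err, out)
    }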

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (23.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-107675 -v=7 --alsologtostderr
E0815 00:52:43.210439  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-107675 -v=7 --alsologtostderr: (22.45214384s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr: (1.019642036s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.47s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-107675 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-107675 status --output json -v=7 --alsologtostderr: (1.014686663s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp testdata/cp-test.txt ha-107675:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile458672102/001/cp-test_ha-107675.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675:/home/docker/cp-test.txt ha-107675-m02:/home/docker/cp-test_ha-107675_ha-107675-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m02 "sudo cat /home/docker/cp-test_ha-107675_ha-107675-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675:/home/docker/cp-test.txt ha-107675-m03:/home/docker/cp-test_ha-107675_ha-107675-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m03 "sudo cat /home/docker/cp-test_ha-107675_ha-107675-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675:/home/docker/cp-test.txt ha-107675-m04:/home/docker/cp-test_ha-107675_ha-107675-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m04 "sudo cat /home/docker/cp-test_ha-107675_ha-107675-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp testdata/cp-test.txt ha-107675-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile458672102/001/cp-test_ha-107675-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m02:/home/docker/cp-test.txt ha-107675:/home/docker/cp-test_ha-107675-m02_ha-107675.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675 "sudo cat /home/docker/cp-test_ha-107675-m02_ha-107675.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m02:/home/docker/cp-test.txt ha-107675-m03:/home/docker/cp-test_ha-107675-m02_ha-107675-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m03 "sudo cat /home/docker/cp-test_ha-107675-m02_ha-107675-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m02:/home/docker/cp-test.txt ha-107675-m04:/home/docker/cp-test_ha-107675-m02_ha-107675-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m04 "sudo cat /home/docker/cp-test_ha-107675-m02_ha-107675-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp testdata/cp-test.txt ha-107675-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile458672102/001/cp-test_ha-107675-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m03:/home/docker/cp-test.txt ha-107675:/home/docker/cp-test_ha-107675-m03_ha-107675.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675 "sudo cat /home/docker/cp-test_ha-107675-m03_ha-107675.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m03:/home/docker/cp-test.txt ha-107675-m02:/home/docker/cp-test_ha-107675-m03_ha-107675-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m02 "sudo cat /home/docker/cp-test_ha-107675-m03_ha-107675-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m03:/home/docker/cp-test.txt ha-107675-m04:/home/docker/cp-test_ha-107675-m03_ha-107675-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m04 "sudo cat /home/docker/cp-test_ha-107675-m03_ha-107675-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp testdata/cp-test.txt ha-107675-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile458672102/001/cp-test_ha-107675-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m04:/home/docker/cp-test.txt ha-107675:/home/docker/cp-test_ha-107675-m04_ha-107675.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675 "sudo cat /home/docker/cp-test_ha-107675-m04_ha-107675.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m04:/home/docker/cp-test.txt ha-107675-m02:/home/docker/cp-test_ha-107675-m04_ha-107675-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m02 "sudo cat /home/docker/cp-test_ha-107675-m04_ha-107675-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 cp ha-107675-m04:/home/docker/cp-test.txt ha-107675-m03:/home/docker/cp-test_ha-107675-m04_ha-107675-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 ssh -n ha-107675-m03 "sudo cat /home/docker/cp-test_ha-107675-m04_ha-107675-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.05s)
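
Every cp/ssh pair above follows the same pattern: copy testdata/cp-test.txt onto a node (or between nodes), then sudo cat it back over SSH to prove the transfer landed. A condensed Go sketch of that loop for the four nodes of this profile; the binary path and node names are taken from the log, and the pairwise node-to-node copies are omitted for brevity:

    // cp_verify.go - illustrative sketch only.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        minikube := "out/minikube-linux-arm64"
        nodes := []string{"ha-107675", "ha-107675-m02", "ha-107675-m03", "ha-107675-m04"}
        for _, node := range nodes {
            dst := node + ":/home/docker/cp-test.txt"
            // minikube -p ha-107675 cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
            if err := exec.Command(minikube, "-p", "ha-107675", "cp", "testdata/cp-test.txt", dst).Run(); err != nil {
                panic(err)
            }
            // minikube -p ha-107675 ssh -n <node> "sudo cat /home/docker/cp-test.txt"
            out, err := exec.Command(minikube, "-p", "ha-107675", "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt").Output()
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s: %q\n", node, strings.TrimSpace(string(out)))
        }
    }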

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-107675 node stop m02 -v=7 --alsologtostderr: (12.078141702s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr: exit status 7 (785.023538ms)

                                                
                                                
-- stdout --
	ha-107675
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-107675-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107675-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-107675-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:53:19.934305  645356 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:53:19.934468  645356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:53:19.934489  645356 out.go:304] Setting ErrFile to fd 2...
	I0815 00:53:19.934519  645356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:53:19.934778  645356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	I0815 00:53:19.934988  645356 out.go:298] Setting JSON to false
	I0815 00:53:19.935075  645356 mustload.go:65] Loading cluster: ha-107675
	I0815 00:53:19.935135  645356 notify.go:220] Checking for updates...
	I0815 00:53:19.935529  645356 config.go:182] Loaded profile config "ha-107675": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 00:53:19.935565  645356 status.go:255] checking status of ha-107675 ...
	I0815 00:53:19.936141  645356 cli_runner.go:164] Run: docker container inspect ha-107675 --format={{.State.Status}}
	I0815 00:53:19.957918  645356 status.go:330] ha-107675 host status = "Running" (err=<nil>)
	I0815 00:53:19.957941  645356 host.go:66] Checking if "ha-107675" exists ...
	I0815 00:53:19.958250  645356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107675
	I0815 00:53:19.988870  645356 host.go:66] Checking if "ha-107675" exists ...
	I0815 00:53:19.989162  645356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:53:19.989211  645356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107675
	I0815 00:53:20.021383  645356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/ha-107675/id_rsa Username:docker}
	I0815 00:53:20.117631  645356 ssh_runner.go:195] Run: systemctl --version
	I0815 00:53:20.122152  645356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:53:20.134298  645356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:53:20.198454  645356 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-15 00:53:20.188177293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:53:20.199067  645356 kubeconfig.go:125] found "ha-107675" server: "https://192.168.49.254:8443"
	I0815 00:53:20.199099  645356 api_server.go:166] Checking apiserver status ...
	I0815 00:53:20.199146  645356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:53:20.210548  645356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup
	I0815 00:53:20.220719  645356 api_server.go:182] apiserver freezer: "4:freezer:/docker/6422430dac87cc2f54ee5f2fadfbe7b488da3f53fa68806edc6a075db735cb75/kubepods/burstable/pod92f153d27116fc7e3a7f5a3b2dcc8ecd/a054200ff841fbbfbc44b7b27a79c68ce231ce47f03b06cd295f139df7a9aefc"
	I0815 00:53:20.220792  645356 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6422430dac87cc2f54ee5f2fadfbe7b488da3f53fa68806edc6a075db735cb75/kubepods/burstable/pod92f153d27116fc7e3a7f5a3b2dcc8ecd/a054200ff841fbbfbc44b7b27a79c68ce231ce47f03b06cd295f139df7a9aefc/freezer.state
	I0815 00:53:20.229695  645356 api_server.go:204] freezer state: "THAWED"
	I0815 00:53:20.229723  645356 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0815 00:53:20.237739  645356 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0815 00:53:20.237766  645356 status.go:422] ha-107675 apiserver status = Running (err=<nil>)
	I0815 00:53:20.237795  645356 status.go:257] ha-107675 status: &{Name:ha-107675 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:53:20.237828  645356 status.go:255] checking status of ha-107675-m02 ...
	I0815 00:53:20.238157  645356 cli_runner.go:164] Run: docker container inspect ha-107675-m02 --format={{.State.Status}}
	I0815 00:53:20.254875  645356 status.go:330] ha-107675-m02 host status = "Stopped" (err=<nil>)
	I0815 00:53:20.254897  645356 status.go:343] host is not running, skipping remaining checks
	I0815 00:53:20.254904  645356 status.go:257] ha-107675-m02 status: &{Name:ha-107675-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:53:20.254924  645356 status.go:255] checking status of ha-107675-m03 ...
	I0815 00:53:20.255246  645356 cli_runner.go:164] Run: docker container inspect ha-107675-m03 --format={{.State.Status}}
	I0815 00:53:20.271755  645356 status.go:330] ha-107675-m03 host status = "Running" (err=<nil>)
	I0815 00:53:20.271779  645356 host.go:66] Checking if "ha-107675-m03" exists ...
	I0815 00:53:20.272197  645356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107675-m03
	I0815 00:53:20.288891  645356 host.go:66] Checking if "ha-107675-m03" exists ...
	I0815 00:53:20.289208  645356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:53:20.289260  645356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107675-m03
	I0815 00:53:20.307981  645356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/ha-107675-m03/id_rsa Username:docker}
	I0815 00:53:20.401701  645356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:53:20.414436  645356 kubeconfig.go:125] found "ha-107675" server: "https://192.168.49.254:8443"
	I0815 00:53:20.414467  645356 api_server.go:166] Checking apiserver status ...
	I0815 00:53:20.414514  645356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:53:20.426732  645356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	I0815 00:53:20.439018  645356 api_server.go:182] apiserver freezer: "4:freezer:/docker/b55b904290442e2efa1094c70de433476d6c4ea4197730d034a3de95a340d320/kubepods/burstable/pod579579b5cce565960410d1cf04516c34/d360e43fd9ee59d4c3430e4d91945c1aedd1928c4dac8571eff4f866cf9274a3"
	I0815 00:53:20.439116  645356 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b55b904290442e2efa1094c70de433476d6c4ea4197730d034a3de95a340d320/kubepods/burstable/pod579579b5cce565960410d1cf04516c34/d360e43fd9ee59d4c3430e4d91945c1aedd1928c4dac8571eff4f866cf9274a3/freezer.state
	I0815 00:53:20.450315  645356 api_server.go:204] freezer state: "THAWED"
	I0815 00:53:20.450346  645356 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0815 00:53:20.459779  645356 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0815 00:53:20.459898  645356 status.go:422] ha-107675-m03 apiserver status = Running (err=<nil>)
	I0815 00:53:20.459915  645356 status.go:257] ha-107675-m03 status: &{Name:ha-107675-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:53:20.459933  645356 status.go:255] checking status of ha-107675-m04 ...
	I0815 00:53:20.460271  645356 cli_runner.go:164] Run: docker container inspect ha-107675-m04 --format={{.State.Status}}
	I0815 00:53:20.478523  645356 status.go:330] ha-107675-m04 host status = "Running" (err=<nil>)
	I0815 00:53:20.478545  645356 host.go:66] Checking if "ha-107675-m04" exists ...
	I0815 00:53:20.478941  645356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-107675-m04
	I0815 00:53:20.517021  645356 host.go:66] Checking if "ha-107675-m04" exists ...
	I0815 00:53:20.517335  645356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:53:20.517384  645356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-107675-m04
	I0815 00:53:20.553623  645356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/ha-107675-m04/id_rsa Username:docker}
	I0815 00:53:20.653430  645356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:53:20.669045  645356 status.go:257] ha-107675-m04 status: &{Name:ha-107675-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)
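
The stderr trace shows how the status command decides a control plane's apiserver is healthy: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then GET /healthz on the HA endpoint and expect a 200/ok. A minimal sketch of just that final probe, using the load-balancer address from this log; skipping TLS verification here is a shortcut for a throwaway local cluster, not necessarily what minikube itself does:

    // healthz_probe.go - illustrative sketch only.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // local test cluster only
        }}
        resp, err := client.Get("https://192.168.49.254:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy plane answers "200 ok", as logged above
    }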

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-107675 node start m02 -v=7 --alsologtostderr: (17.066175282s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr: (1.00457678s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.20s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (146.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-107675 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-107675 -v=7 --alsologtostderr
E0815 00:54:02.900326  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:02.906853  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:02.918226  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:02.939598  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:02.980896  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:03.062292  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:03.224521  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:03.545873  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:04.187166  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:05.468462  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:08.030006  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:13.152314  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-107675 -v=7 --alsologtostderr: (37.145096302s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-107675 --wait=true -v=7 --alsologtostderr
E0815 00:54:23.393958  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:43.875969  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:54:59.350204  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:55:24.837277  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:55:27.052176  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-107675 --wait=true -v=7 --alsologtostderr: (1m48.950140708s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-107675
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (146.23s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-107675 node delete m03 -v=7 --alsologtostderr: (9.412539362s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.38s)
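
The go-template passed to kubectl above walks every node and prints the status of its Ready condition; the test then only has to check that the output is all True. The same template logic run locally against toy structs, so the range/if syntax is visible outside kubectl; field names are capitalized here because text/template is reading exported Go fields rather than kubectl's lowercase JSON, and the node/condition types are stand-ins, not the real Kubernetes API objects:

    // ready_template.go - illustrative sketch only.
    package main

    import (
        "os"
        "text/template"
    )

    type condition struct{ Type, Status string }

    type node struct {
        Status struct{ Conditions []condition }
    }

    func main() {
        const tmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`
        var a, b node
        a.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
        b.Status.Conditions = []condition{{Type: "MemoryPressure", Status: "False"}, {Type: "Ready", Status: "True"}}
        data := struct{ Items []node }{Items: []node{a, b}}
        // Prints " True" once per node, mirroring the kubectl output the test inspects.
        if err := template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }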

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 stop -v=7 --alsologtostderr
E0815 00:56:46.759046  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-107675 stop -v=7 --alsologtostderr: (35.849826264s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr: exit status 7 (95.705776ms)

                                                
                                                
-- stdout --
	ha-107675
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107675-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-107675-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:56:53.323521  659605 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:56:53.323700  659605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:56:53.323712  659605 out.go:304] Setting ErrFile to fd 2...
	I0815 00:56:53.323718  659605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:56:53.324005  659605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	I0815 00:56:53.324231  659605 out.go:298] Setting JSON to false
	I0815 00:56:53.324294  659605 mustload.go:65] Loading cluster: ha-107675
	I0815 00:56:53.324402  659605 notify.go:220] Checking for updates...
	I0815 00:56:53.324747  659605 config.go:182] Loaded profile config "ha-107675": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 00:56:53.324765  659605 status.go:255] checking status of ha-107675 ...
	I0815 00:56:53.325571  659605 cli_runner.go:164] Run: docker container inspect ha-107675 --format={{.State.Status}}
	I0815 00:56:53.342636  659605 status.go:330] ha-107675 host status = "Stopped" (err=<nil>)
	I0815 00:56:53.342661  659605 status.go:343] host is not running, skipping remaining checks
	I0815 00:56:53.342669  659605 status.go:257] ha-107675 status: &{Name:ha-107675 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:56:53.342702  659605 status.go:255] checking status of ha-107675-m02 ...
	I0815 00:56:53.343013  659605 cli_runner.go:164] Run: docker container inspect ha-107675-m02 --format={{.State.Status}}
	I0815 00:56:53.359323  659605 status.go:330] ha-107675-m02 host status = "Stopped" (err=<nil>)
	I0815 00:56:53.359348  659605 status.go:343] host is not running, skipping remaining checks
	I0815 00:56:53.359356  659605 status.go:257] ha-107675-m02 status: &{Name:ha-107675-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:56:53.359377  659605 status.go:255] checking status of ha-107675-m04 ...
	I0815 00:56:53.359693  659605 cli_runner.go:164] Run: docker container inspect ha-107675-m04 --format={{.State.Status}}
	I0815 00:56:53.375286  659605 status.go:330] ha-107675-m04 host status = "Stopped" (err=<nil>)
	I0815 00:56:53.375311  659605 status.go:343] host is not running, skipping remaining checks
	I0815 00:56:53.375320  659605 status.go:257] ha-107675-m04 status: &{Name:ha-107675-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.95s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (80.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-107675 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-107675 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m19.10046772s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (80.09s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (41.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-107675 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-107675 --control-plane -v=7 --alsologtostderr: (40.517892284s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-107675 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

                                                
                                    
TestJSONOutput/start/Command (50.05s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-656202 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0815 00:59:30.600401  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-656202 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (50.044321264s)
--- PASS: TestJSONOutput/start/Command (50.05s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-656202 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-656202 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.83s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-656202 --output=json --user=testUser
E0815 00:59:59.350650  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-656202 --output=json --user=testUser: (5.824951033s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-449636 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-449636 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.161143ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"de087a39-9590-426e-b506-446a08f37d3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-449636] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac060b49-45cc-45c6-90b7-50138290afe2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19443"}}
	{"specversion":"1.0","id":"c50d3b59-80e2-480b-a09d-1d1d4bf9b5cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2bb0f317-13be-481b-b7f3-192e0d72a784","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig"}}
	{"specversion":"1.0","id":"683060c3-da67-4168-af1f-c34f22cf37c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube"}}
	{"specversion":"1.0","id":"1cfeaeb5-d0ea-4f06-805d-ddf0c47534c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"db73d5e4-8353-4560-bb0c-b48d44f90a17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0420611a-1a97-44fd-8d45-48acb778ec5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-449636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-449636
--- PASS: TestErrorJSONOutput (0.23s)
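
Each stdout line above is a CloudEvents-style JSON object: a fixed envelope (specversion, id, source, type, datacontenttype) plus a string-to-string data payload, with the final io.k8s.sigs.minikube.error event carrying the name, exit code, and message behind exit status 56. A small decoder for lines of that shape, modeling only the fields visible in this log rather than minikube's own event types:

    // event_decode.go - illustrative sketch only.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type event struct {
        SpecVersion     string            `json:"specversion"`
        ID              string            `json:"id"`
        Source          string            `json:"source"`
        Type            string            `json:"type"`
        DataContentType string            `json:"datacontenttype"`
        Data            map[string]string `json:"data"`
    }

    func main() {
        // The error event copied verbatim from the stdout block above.
        line := `{"specversion":"1.0","id":"0420611a-1a97-44fd-8d45-48acb778ec5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
        var e event
        if err := json.Unmarshal([]byte(line), &e); err != nil {
            panic(err)
        }
        fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"], e.Data["message"])
    }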

                                                
                                    
TestKicCustomNetwork/create_custom_network (36.46s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-222335 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-222335 --network=: (34.367443821s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-222335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-222335
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-222335: (2.06924276s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.46s)
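
After starting with an empty --network= value, the test lists docker networks to confirm a dedicated one was created for the profile. A sketch of that check, assuming the profile-named network from this run and docker on PATH (the exact assertion lives in kic_custom_network_test.go):

    // network_ls.go - illustrative sketch only.
    package main

    import (
        "fmt"
        "os/exec"
        "slices"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
        if err != nil {
            panic(err)
        }
        names := strings.Fields(string(out))
        fmt.Println("docker-network-222335 present:", slices.Contains(names, "docker-network-222335"))
    }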

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.05s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-526633 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-526633 --network=bridge: (31.090751227s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-526633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-526633
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-526633: (1.930712227s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.05s)

                                                
                                    
TestKicExistingNetwork (34.64s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-797818 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-797818 --network=existing-network: (32.472496803s)
helpers_test.go:175: Cleaning up "existing-network-797818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-797818
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-797818: (2.000939005s)
--- PASS: TestKicExistingNetwork (34.64s)
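
The existing-network scenario above can be reproduced by hand: create a Docker network first, then point minikube at it with --network. A minimal sketch; the network and profile names are illustrative:

    # create the network outside of minikube
    docker network create existing-network
    # reuse it for the cluster instead of letting minikube create one
    out/minikube-linux-arm64 start -p existing-net-demo --network=existing-network
    # clean up when done
    out/minikube-linux-arm64 delete -p existing-net-demo
    docker network rm existing-network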

                                                
                                    
x
+
TestKicCustomSubnet (35.74s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-760371 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-760371 --subnet=192.168.60.0/24: (33.612656059s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-760371 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-760371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-760371
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-760371: (2.104096951s)
--- PASS: TestKicCustomSubnet (35.74s)
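
The subnet check above boils down to two commands: start with --subnet and read the subnet back from the Docker network minikube created (named after the profile). A sketch with a hypothetical profile name:

    # start a cluster on an explicit subnet
    out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
    # confirm the Docker network actually uses that subnet
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected output: 192.168.60.0/24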

                                                
                                    
x
+
TestKicStaticIP (32.96s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-447061 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-447061 --static-ip=192.168.200.200: (30.707146846s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-447061 ip
helpers_test.go:175: Cleaning up "static-ip-447061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-447061
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-447061: (2.098101529s)
--- PASS: TestKicStaticIP (32.96s)
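
Similarly, the static-IP flow exercised above amounts to starting with --static-ip and reading the address back with minikube ip; the profile name below is illustrative:

    # pin the node IP instead of letting Docker assign one
    out/minikube-linux-arm64 start -p static-demo --static-ip=192.168.200.200
    # should print the pinned address
    out/minikube-linux-arm64 -p static-demo ip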

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (66.45s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-267774 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-267774 --driver=docker  --container-runtime=containerd: (30.113759354s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-270671 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-270671 --driver=docker  --container-runtime=containerd: (30.580557878s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-267774
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-270671
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-270671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-270671
E0815 01:04:02.899220  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-270671: (2.037301831s)
helpers_test.go:175: Cleaning up "first-267774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-267774
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-267774: (2.222384885s)
--- PASS: TestMinikubeProfile (66.45s)
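
The profile juggling above is the ordinary multi-profile workflow: two independent clusters, switching the active profile, and listing them as JSON. A sketch with made-up profile names:

    # two independent clusters side by side
    out/minikube-linux-arm64 start -p first-demo --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p second-demo --driver=docker --container-runtime=containerd
    # switch the active profile, then list all profiles as JSON
    out/minikube-linux-arm64 profile first-demo
    out/minikube-linux-arm64 profile list -ojson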

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.23s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-136497 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-136497 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.233237678s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.23s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-136497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
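
The two mount-start steps above start a Kubernetes-free node with the host mount enabled on a fixed port and then verify the mount over ssh. A minimal sketch using the same flags; the profile name is illustrative:

    # start a no-kubernetes node with the host mount enabled
    out/minikube-linux-arm64 start -p mount-demo --memory=2048 --mount \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=containerd
    # the host directory should be visible inside the node at /minikube-host
    out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host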

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.81s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-150339 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-150339 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.805587363s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.81s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-150339 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-136497 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-136497 --alsologtostderr -v=5: (1.631634547s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-150339 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-150339
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-150339: (1.212843557s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.36s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-150339
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-150339: (6.359011728s)
--- PASS: TestMountStart/serial/RestartStopped (7.36s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-150339 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (62.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-064716 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0815 01:04:59.350616  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-064716 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m1.793621394s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.33s)
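
The two-node bring-up above is a single start invocation plus a status check. A sketch with a hypothetical profile name:

    # create a control-plane node plus one worker in one shot
    out/minikube-linux-arm64 start -p multi-demo --nodes=2 --wait=true --memory=2200 \
      --driver=docker --container-runtime=containerd
    # both nodes should report Running
    out/minikube-linux-arm64 -p multi-demo status --alsologtostderr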

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (15.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-064716 -- rollout status deployment/busybox: (13.926442132s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- exec busybox-7dff88458-vnjqd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- exec busybox-7dff88458-wgmzl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- exec busybox-7dff88458-vnjqd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- exec busybox-7dff88458-wgmzl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- exec busybox-7dff88458-vnjqd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- exec busybox-7dff88458-wgmzl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.80s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- exec busybox-7dff88458-vnjqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- exec busybox-7dff88458-vnjqd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- exec busybox-7dff88458-wgmzl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-064716 -- exec busybox-7dff88458-wgmzl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
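
The host-reachability check above resolves host.minikube.internal from inside a busybox pod and pings the resulting address. A sketch assuming a running multi-node profile and a pod named busybox-demo (both names hypothetical); the gateway address is the one seen in this report:

    # resolve the host alias inside the pod, keeping only the address line
    out/minikube-linux-arm64 kubectl -p multi-demo -- exec busybox-demo -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # ping the host gateway address returned by the lookup
    out/minikube-linux-arm64 kubectl -p multi-demo -- exec busybox-demo -- \
      sh -c "ping -c 1 192.168.67.1"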

                                                
                                    
x
+
TestMultiNode/serial/AddNode (17.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-064716 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-064716 -v 3 --alsologtostderr: (16.702505334s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.38s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-064716 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp testdata/cp-test.txt multinode-064716:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp multinode-064716:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile803769457/001/cp-test_multinode-064716.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp multinode-064716:/home/docker/cp-test.txt multinode-064716-m02:/home/docker/cp-test_multinode-064716_multinode-064716-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m02 "sudo cat /home/docker/cp-test_multinode-064716_multinode-064716-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp multinode-064716:/home/docker/cp-test.txt multinode-064716-m03:/home/docker/cp-test_multinode-064716_multinode-064716-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m03 "sudo cat /home/docker/cp-test_multinode-064716_multinode-064716-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp testdata/cp-test.txt multinode-064716-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp multinode-064716-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile803769457/001/cp-test_multinode-064716-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp multinode-064716-m02:/home/docker/cp-test.txt multinode-064716:/home/docker/cp-test_multinode-064716-m02_multinode-064716.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716 "sudo cat /home/docker/cp-test_multinode-064716-m02_multinode-064716.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp multinode-064716-m02:/home/docker/cp-test.txt multinode-064716-m03:/home/docker/cp-test_multinode-064716-m02_multinode-064716-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m03 "sudo cat /home/docker/cp-test_multinode-064716-m02_multinode-064716-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp testdata/cp-test.txt multinode-064716-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp multinode-064716-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile803769457/001/cp-test_multinode-064716-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp multinode-064716-m03:/home/docker/cp-test.txt multinode-064716:/home/docker/cp-test_multinode-064716-m03_multinode-064716.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716 "sudo cat /home/docker/cp-test_multinode-064716-m03_multinode-064716.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 cp multinode-064716-m03:/home/docker/cp-test.txt multinode-064716-m02:/home/docker/cp-test_multinode-064716-m03_multinode-064716-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 ssh -n multinode-064716-m02 "sudo cat /home/docker/cp-test_multinode-064716-m03_multinode-064716-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.97s)
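
The copy matrix above exercises minikube cp in three directions (host to node, node to host, node to node) and verifies each copy with ssh + cat. A condensed sketch; profile and node names are illustrative:

    # host -> node
    out/minikube-linux-arm64 -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-arm64 -p multi-demo cp multi-demo:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    # node -> another node
    out/minikube-linux-arm64 -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt
    # verify on the target node
    out/minikube-linux-arm64 -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"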

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-064716 node stop m03: (1.216575108s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-064716 status: exit status 7 (499.81796ms)

                                                
                                                
-- stdout --
	multinode-064716
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-064716-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-064716-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-064716 status --alsologtostderr: exit status 7 (525.962962ms)

                                                
                                                
-- stdout --
	multinode-064716
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-064716-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-064716-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:06:19.562088  713037 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:06:19.562224  713037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:06:19.562234  713037 out.go:304] Setting ErrFile to fd 2...
	I0815 01:06:19.562240  713037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:06:19.562512  713037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	I0815 01:06:19.562691  713037 out.go:298] Setting JSON to false
	I0815 01:06:19.562721  713037 mustload.go:65] Loading cluster: multinode-064716
	I0815 01:06:19.562828  713037 notify.go:220] Checking for updates...
	I0815 01:06:19.563113  713037 config.go:182] Loaded profile config "multinode-064716": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 01:06:19.563125  713037 status.go:255] checking status of multinode-064716 ...
	I0815 01:06:19.563603  713037 cli_runner.go:164] Run: docker container inspect multinode-064716 --format={{.State.Status}}
	I0815 01:06:19.583003  713037 status.go:330] multinode-064716 host status = "Running" (err=<nil>)
	I0815 01:06:19.583026  713037 host.go:66] Checking if "multinode-064716" exists ...
	I0815 01:06:19.583390  713037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-064716
	I0815 01:06:19.611758  713037 host.go:66] Checking if "multinode-064716" exists ...
	I0815 01:06:19.612217  713037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 01:06:19.612293  713037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-064716
	I0815 01:06:19.633351  713037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33650 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/multinode-064716/id_rsa Username:docker}
	I0815 01:06:19.733276  713037 ssh_runner.go:195] Run: systemctl --version
	I0815 01:06:19.737541  713037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:06:19.749786  713037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 01:06:19.812943  713037 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-15 01:06:19.803505098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 01:06:19.813524  713037 kubeconfig.go:125] found "multinode-064716" server: "https://192.168.67.2:8443"
	I0815 01:06:19.813558  713037 api_server.go:166] Checking apiserver status ...
	I0815 01:06:19.813602  713037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:06:19.824931  713037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	I0815 01:06:19.834857  713037 api_server.go:182] apiserver freezer: "4:freezer:/docker/b1adc72db26f0af98a2cc3d9430a2953a42a966b39a03966b314219c34db0a71/kubepods/burstable/podb4b43e52fb3ce23a394d803c360293a1/7cd754a7f230193d9192bbc8b7898e2688c60f15b4a832cdc5e8943d320357eb"
	I0815 01:06:19.834936  713037 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b1adc72db26f0af98a2cc3d9430a2953a42a966b39a03966b314219c34db0a71/kubepods/burstable/podb4b43e52fb3ce23a394d803c360293a1/7cd754a7f230193d9192bbc8b7898e2688c60f15b4a832cdc5e8943d320357eb/freezer.state
	I0815 01:06:19.843580  713037 api_server.go:204] freezer state: "THAWED"
	I0815 01:06:19.843625  713037 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0815 01:06:19.851491  713037 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0815 01:06:19.851522  713037 status.go:422] multinode-064716 apiserver status = Running (err=<nil>)
	I0815 01:06:19.851534  713037 status.go:257] multinode-064716 status: &{Name:multinode-064716 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 01:06:19.851559  713037 status.go:255] checking status of multinode-064716-m02 ...
	I0815 01:06:19.851991  713037 cli_runner.go:164] Run: docker container inspect multinode-064716-m02 --format={{.State.Status}}
	I0815 01:06:19.869612  713037 status.go:330] multinode-064716-m02 host status = "Running" (err=<nil>)
	I0815 01:06:19.869640  713037 host.go:66] Checking if "multinode-064716-m02" exists ...
	I0815 01:06:19.869951  713037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-064716-m02
	I0815 01:06:19.885614  713037 host.go:66] Checking if "multinode-064716-m02" exists ...
	I0815 01:06:19.885927  713037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 01:06:19.885972  713037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-064716-m02
	I0815 01:06:19.904349  713037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33655 SSHKeyPath:/home/jenkins/minikube-integration/19443-587265/.minikube/machines/multinode-064716-m02/id_rsa Username:docker}
	I0815 01:06:19.997009  713037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:06:20.014104  713037 status.go:257] multinode-064716-m02 status: &{Name:multinode-064716-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0815 01:06:20.014147  713037 status.go:255] checking status of multinode-064716-m03 ...
	I0815 01:06:20.014518  713037 cli_runner.go:164] Run: docker container inspect multinode-064716-m03 --format={{.State.Status}}
	I0815 01:06:20.033203  713037 status.go:330] multinode-064716-m03 host status = "Stopped" (err=<nil>)
	I0815 01:06:20.033229  713037 status.go:343] host is not running, skipping remaining checks
	I0815 01:06:20.033237  713037 status.go:257] multinode-064716-m03 status: &{Name:multinode-064716-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 node start m03 -v=7 --alsologtostderr
E0815 01:06:22.413486  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-064716 node start m03 -v=7 --alsologtostderr: (8.691585216s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.46s)
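
StopNode and StartAfterStop above together cover the per-node lifecycle: stop a single worker, observe status report it as Stopped (with a non-zero exit, as seen above), then start it again. A sketch against a hypothetical multi-node profile:

    # stop only the third node, leaving the rest of the cluster running
    out/minikube-linux-arm64 -p multi-demo node stop m03
    # status exits with status 7 while one host is Stopped
    out/minikube-linux-arm64 -p multi-demo status
    # bring the node back and re-check the cluster
    out/minikube-linux-arm64 -p multi-demo node start m03
    kubectl get nodes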

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (81.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-064716
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-064716
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-064716: (24.940565783s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-064716 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-064716 --wait=true -v=8 --alsologtostderr: (56.737109902s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-064716
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.79s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-064716 node delete m03: (4.578327352s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-064716 stop: (23.811808302s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-064716 status: exit status 7 (87.719779ms)

                                                
                                                
-- stdout --
	multinode-064716
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-064716-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-064716 status --alsologtostderr: exit status 7 (87.120414ms)

                                                
                                                
-- stdout --
	multinode-064716
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-064716-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:08:20.461079  721029 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:08:20.461238  721029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:08:20.461247  721029 out.go:304] Setting ErrFile to fd 2...
	I0815 01:08:20.461252  721029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:08:20.461525  721029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	I0815 01:08:20.461721  721029 out.go:298] Setting JSON to false
	I0815 01:08:20.461759  721029 mustload.go:65] Loading cluster: multinode-064716
	I0815 01:08:20.461855  721029 notify.go:220] Checking for updates...
	I0815 01:08:20.462176  721029 config.go:182] Loaded profile config "multinode-064716": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 01:08:20.462196  721029 status.go:255] checking status of multinode-064716 ...
	I0815 01:08:20.462719  721029 cli_runner.go:164] Run: docker container inspect multinode-064716 --format={{.State.Status}}
	I0815 01:08:20.479840  721029 status.go:330] multinode-064716 host status = "Stopped" (err=<nil>)
	I0815 01:08:20.479887  721029 status.go:343] host is not running, skipping remaining checks
	I0815 01:08:20.479895  721029 status.go:257] multinode-064716 status: &{Name:multinode-064716 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 01:08:20.479960  721029 status.go:255] checking status of multinode-064716-m02 ...
	I0815 01:08:20.480280  721029 cli_runner.go:164] Run: docker container inspect multinode-064716-m02 --format={{.State.Status}}
	I0815 01:08:20.498592  721029 status.go:330] multinode-064716-m02 host status = "Stopped" (err=<nil>)
	I0815 01:08:20.498613  721029 status.go:343] host is not running, skipping remaining checks
	I0815 01:08:20.498621  721029 status.go:257] multinode-064716-m02 status: &{Name:multinode-064716-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (54.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-064716 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0815 01:09:02.898640  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-064716 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.404670361s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-064716 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.07s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (35.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-064716
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-064716-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-064716-m02 --driver=docker  --container-runtime=containerd: exit status 14 (74.407117ms)

                                                
                                                
-- stdout --
	* [multinode-064716-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-064716-m02' is duplicated with machine name 'multinode-064716-m02' in profile 'multinode-064716'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-064716-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-064716-m03 --driver=docker  --container-runtime=containerd: (32.675819701s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-064716
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-064716: exit status 80 (327.675201ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-064716 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-064716-m03 already exists in multinode-064716-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-064716-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-064716-m03: (1.986080933s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.11s)

                                                
                                    
x
+
TestPreload (113.96s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-507324 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0815 01:09:59.350388  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:10:25.961803  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-507324 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m18.405030305s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-507324 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-507324 image pull gcr.io/k8s-minikube/busybox: (1.193370812s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-507324
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-507324: (12.072153555s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-507324 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-507324 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (19.424503886s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-507324 image list
helpers_test.go:175: Cleaning up "test-preload-507324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-507324
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-507324: (2.451521306s)
--- PASS: TestPreload (113.96s)
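
The preload run above starts with the preloaded images tarball disabled, pulls an extra image, stops and restarts the cluster, and checks that the pulled image is still present. A sketch with a hypothetical profile name:

    # start with preload disabled on an older Kubernetes version
    out/minikube-linux-arm64 start -p preload-demo --memory=2200 --preload=false \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    # add an image that is not part of any preload
    out/minikube-linux-arm64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
    # stop, restart, and confirm the image is still listed
    out/minikube-linux-arm64 stop -p preload-demo
    out/minikube-linux-arm64 start -p preload-demo --memory=2200 --wait=true \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p preload-demo image list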

                                                
                                    
x
+
TestScheduledStopUnix (109.44s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-507552 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-507552 --memory=2048 --driver=docker  --container-runtime=containerd: (33.388352192s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-507552 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-507552 -n scheduled-stop-507552
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-507552 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-507552 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-507552 -n scheduled-stop-507552
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-507552
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-507552 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-507552
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-507552: exit status 7 (63.244084ms)

                                                
                                                
-- stdout --
	scheduled-stop-507552
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-507552 -n scheduled-stop-507552
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-507552 -n scheduled-stop-507552: exit status 7 (69.283959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-507552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-507552
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-507552: (4.541673319s)
--- PASS: TestScheduledStopUnix (109.44s)
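
The scheduled-stop flow above is driven entirely by the --schedule and --cancel-scheduled flags on minikube stop. A minimal sketch; the profile name is illustrative:

    # schedule a stop five minutes out; the command returns immediately
    out/minikube-linux-arm64 stop -p sched-demo --schedule 5m
    # the pending stop is visible in status
    out/minikube-linux-arm64 status --format={{.TimeToStop}} -p sched-demo
    # cancel it again
    out/minikube-linux-arm64 stop -p sched-demo --cancel-scheduled
    # or schedule a short one and let it fire; status then exits 7 with host: Stopped
    out/minikube-linux-arm64 stop -p sched-demo --schedule 15s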

                                                
                                    
x
+
TestInsufficientStorage (10.91s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-715093 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-715093 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.477331465s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6397f8ae-c0ac-42f2-9da2-6feb8d2826c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-715093] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"30b5ffc9-3967-4f46-b922-ab9329e71611","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19443"}}
	{"specversion":"1.0","id":"85168f87-98d6-4263-9b3b-8faf322f5f2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"55bb1109-843b-4d31-b486-be6085d08b11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig"}}
	{"specversion":"1.0","id":"8c4ec066-8d4a-40c5-99e8-2835909f1b50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube"}}
	{"specversion":"1.0","id":"b3b29a96-2bef-4586-9c2d-e66e10fa9665","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"31d13604-2246-4cc7-a2a9-0f3a3d6235dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c83d3f2b-c530-495a-a125-e0716fefcfd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"17cffb3c-6d65-4fd0-a7e6-6b0a13d00410","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8d0ba9ba-8994-41c7-90fc-4a562bbe39c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"83b2ce05-b500-4e39-8343-831a5564f314","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"91ca683f-cf11-40f5-a38d-091c25ad78e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-715093\" primary control-plane node in \"insufficient-storage-715093\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"963e2d70-ed76-4327-8332-07dd55790010","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723650208-19443 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6cdb8a7-3ca8-4386-93ab-3ccf6e861f2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2aa37e57-c3f5-4ba2-8ac5-d725b1c554ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-715093 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-715093 --output=json --layout=cluster: exit status 7 (277.259072ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-715093","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-715093","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:13:45.829452  739702 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-715093" does not appear in /home/jenkins/minikube-integration/19443-587265/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-715093 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-715093 --output=json --layout=cluster: exit status 7 (270.094894ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-715093","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-715093","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:13:46.101179  739765 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-715093" does not appear in /home/jenkins/minikube-integration/19443-587265/kubeconfig
	E0815 01:13:46.111129  739765 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/insufficient-storage-715093/events.json: no such file or directory

                                                
                                                
** /stderr **
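The --output=json --layout=cluster form used above is the structured variant of minikube status: the non-zero exit status (7 here) signals that the cluster is not healthy, while the JSON carries per-component detail. A rough sketch of pulling out the interesting fields, assuming jq is available on the host:

    out/minikube-linux-arm64 status -p insufficient-storage-715093 --output=json --layout=cluster \
      | jq -r '.StatusName, .Nodes[].Components.kubelet.StatusName'
    # expected here: InsufficientStorage and Stopped, matching StatusCode 507 and 405 above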
helpers_test.go:175: Cleaning up "insufficient-storage-715093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-715093
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-715093: (1.881425756s)
--- PASS: TestInsufficientStorage (10.91s)

                                                
                                    
TestRunningBinaryUpgrade (94.22s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3941948445 start -p running-upgrade-974082 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0815 01:19:59.350216  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3941948445 start -p running-upgrade-974082 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.279779362s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-974082 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-974082 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.035436603s)
helpers_test.go:175: Cleaning up "running-upgrade-974082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-974082
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-974082: (3.06170325s)
--- PASS: TestRunningBinaryUpgrade (94.22s)

                                                
                                    
TestKubernetesUpgrade (354.4s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-244083 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-244083 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.050834745s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-244083
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-244083: (1.323104893s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-244083 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-244083 status --format={{.Host}}: exit status 7 (105.288611ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-244083 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-244083 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m39.115728671s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-244083 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-244083 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-244083 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (227.714594ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-244083] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-244083
	    minikube start -p kubernetes-upgrade-244083 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2440832 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-244083 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-244083 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-244083 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.12070781s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-244083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-244083
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-244083: (3.245843111s)
--- PASS: TestKubernetesUpgrade (354.40s)
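Stripped of test scaffolding, the upgrade path this test exercises is: create a cluster on the old Kubernetes version, stop it, then start the same profile again with a newer --kubernetes-version (a downgrade is refused, as the K8S_DOWNGRADE_UNSUPPORTED output above shows). A condensed sketch using the same commands the test ran:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-244083 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 stop  -p kubernetes-upgrade-244083
    out/minikube-linux-arm64 start -p kubernetes-upgrade-244083 --memory=2200 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=containerd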

                                                
                                    
TestMissingContainerUpgrade (168.74s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2174002638 start -p missing-upgrade-505022 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2174002638 start -p missing-upgrade-505022 --memory=2200 --driver=docker  --container-runtime=containerd: (1m27.51789809s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-505022
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-505022: (10.286239998s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-505022
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-505022 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-505022 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.278430472s)
helpers_test.go:175: Cleaning up "missing-upgrade-505022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-505022
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-505022: (2.256973535s)
--- PASS: TestMissingContainerUpgrade (168.74s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-793283 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-793283 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (112.542312ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-793283] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
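As the MK_USAGE error states, --no-kubernetes cannot be combined with an explicit --kubernetes-version. The two supported shapes, both taken from runs later in this report, are:

    # clear any globally configured version, then start without Kubernetes
    minikube config unset kubernetes-version
    out/minikube-linux-arm64 start -p NoKubernetes-793283 --no-kubernetes --driver=docker --container-runtime=containerd
    # or start a normal cluster and let minikube pick its default Kubernetes version
    out/minikube-linux-arm64 start -p NoKubernetes-793283 --driver=docker --container-runtime=containerd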
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestPause/serial/Start (71.7s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-807297 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-807297 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m11.697719047s)
--- PASS: TestPause/serial/Start (71.70s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-793283 --driver=docker  --container-runtime=containerd
E0815 01:14:02.898545  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-793283 --driver=docker  --container-runtime=containerd: (41.727725579s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-793283 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.24s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-793283 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-793283 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.608777456s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-793283 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-793283 status -o json: exit status 2 (336.674068ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-793283","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-793283
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-793283: (1.92539105s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.87s)

                                                
                                    
TestNoKubernetes/serial/Start (5.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-793283 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-793283 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.655862119s)
--- PASS: TestNoKubernetes/serial/Start (5.66s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-793283 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-793283 "sudo systemctl is-active --quiet service kubelet": exit status 1 (276.487609ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
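The exit status 1 reported here wraps the remote command's failure: systemctl is-active exits non-zero (status 3 in the stderr above) when the unit is not active, which is exactly what this check relies on. Roughly the same check by hand, with --quiet dropped so the state is printed:

    out/minikube-linux-arm64 ssh -p NoKubernetes-793283 "sudo systemctl is-active kubelet"
    # prints the unit state (e.g. "inactive") and exits non-zero when kubelet is not running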
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-793283
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-793283: (1.22870676s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-793283 --driver=docker  --container-runtime=containerd
E0815 01:14:59.350620  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-793283 --driver=docker  --container-runtime=containerd: (7.1695741s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.17s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.24s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-807297 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-807297 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.223721925s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.24s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-793283 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-793283 "sudo systemctl is-active --quiet service kubelet": exit status 1 (345.189375ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
TestPause/serial/Pause (0.84s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-807297 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.84s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-807297 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-807297 --output=json --layout=cluster: exit status 2 (356.201791ms)

                                                
                                                
-- stdout --
	{"Name":"pause-807297","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-807297","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
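In the cluster-layout JSON the HTTP-style status codes mirror the StatusName fields shown in this report: 200/OK, 405/Stopped, 418/Paused (and 507/InsufficientStorage, 500/Error in the earlier TestInsufficientStorage output). The pause lifecycle the surrounding tests walk through can be reproduced roughly as:

    out/minikube-linux-arm64 pause -p pause-807297
    out/minikube-linux-arm64 status -p pause-807297 --output=json --layout=cluster   # exits 2 while paused, as observed above
    out/minikube-linux-arm64 unpause -p pause-807297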
--- PASS: TestPause/serial/VerifyStatus (0.36s)

                                                
                                    
TestPause/serial/Unpause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-807297 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

                                                
                                    
TestPause/serial/PauseAgain (1.16s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-807297 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-807297 --alsologtostderr -v=5: (1.156403595s)
--- PASS: TestPause/serial/PauseAgain (1.16s)

                                                
                                    
TestPause/serial/DeletePaused (4.32s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-807297 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-807297 --alsologtostderr -v=5: (4.316211384s)
--- PASS: TestPause/serial/DeletePaused (4.32s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-807297
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-807297: exit status 1 (47.846987ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-807297: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.32s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (113.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1977467941 start -p stopped-upgrade-682503 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1977467941 start -p stopped-upgrade-682503 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.378998501s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1977467941 -p stopped-upgrade-682503 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1977467941 -p stopped-upgrade-682503 stop: (19.938765391s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-682503 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0815 01:19:02.898746  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-682503 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (47.401534778s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (113.72s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-682503
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-682503: (1.491993923s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.49s)

                                                
                                    
TestNetworkPlugins/group/false (4.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-404506 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-404506 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (223.973714ms)

                                                
                                                
-- stdout --
	* [false-404506] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:21:32.042970  780655 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:21:32.043663  780655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:21:32.043692  780655 out.go:304] Setting ErrFile to fd 2...
	I0815 01:21:32.043718  780655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:21:32.044053  780655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-587265/.minikube/bin
	I0815 01:21:32.044543  780655 out.go:298] Setting JSON to false
	I0815 01:21:32.045714  780655 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18214,"bootTime":1723666678,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 01:21:32.045834  780655 start.go:139] virtualization:  
	I0815 01:21:32.048722  780655 out.go:177] * [false-404506] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 01:21:32.050148  780655 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:21:32.050311  780655 notify.go:220] Checking for updates...
	I0815 01:21:32.054025  780655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:21:32.055805  780655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-587265/kubeconfig
	I0815 01:21:32.057853  780655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-587265/.minikube
	I0815 01:21:32.059487  780655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 01:21:32.061462  780655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:21:32.064303  780655 config.go:182] Loaded profile config "force-systemd-flag-246341": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 01:21:32.064412  780655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:21:32.088477  780655 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 01:21:32.088610  780655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 01:21:32.176586  780655 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 01:21:32.163350235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 01:21:32.176697  780655 docker.go:307] overlay module found
	I0815 01:21:32.180342  780655 out.go:177] * Using the docker driver based on user configuration
	I0815 01:21:32.182222  780655 start.go:297] selected driver: docker
	I0815 01:21:32.182269  780655 start.go:901] validating driver "docker" against <nil>
	I0815 01:21:32.182299  780655 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:21:32.184829  780655 out.go:177] 
	W0815 01:21:32.186866  780655 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0815 01:21:32.189045  780655 out.go:177] 

                                                
                                                
** /stderr **
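The failure above is the expected guard: with the containerd runtime minikube rejects --cni=false, since containerd relies on a CNI plugin for pod networking. The supported form is simply to keep a CNI, for example by omitting the flag so minikube selects one automatically (an explicit value such as --cni=bridge would also work, but that exact value is an assumption not exercised in this run):

    out/minikube-linux-arm64 start -p false-404506 --memory=2048 --driver=docker --container-runtime=containerd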
net_test.go:88: 
----------------------- debugLogs start: false-404506 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-404506" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-404506" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-404506

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-404506"

                                                
                                                
----------------------- debugLogs end: false-404506 [took: 4.012159378s] --------------------------------
helpers_test.go:175: Cleaning up "false-404506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-404506
--- PASS: TestNetworkPlugins/group/false (4.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (154.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-145466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0815 01:23:02.415714  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:24:02.899226  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:24:59.350565  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-145466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m34.092992705s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (154.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-145466 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af0a66f7-b66e-4568-b540-1d72418f33b3] Pending
helpers_test.go:344: "busybox" [af0a66f7-b66e-4568-b540-1d72418f33b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af0a66f7-b66e-4568-b540-1d72418f33b3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004016405s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-145466 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.64s)
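For a manual spot-check equivalent to what this test does, the busybox pod created from testdata/busybox.yaml can be looked up by the integration-test=busybox label the test waits on and then queried for its open-files limit. The exec command below is taken verbatim from the test; the label-based get pods call is an added illustration:

    kubectl --context old-k8s-version-145466 get pods -l integration-test=busybox
    kubectl --context old-k8s-version-145466 exec busybox -- /bin/sh -c "ulimit -n"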

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (87.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-891255 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-891255 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m27.394800781s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (87.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-145466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-145466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.435151312s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-145466 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-145466 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-145466 --alsologtostderr -v=3: (13.037438475s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-145466 -n old-k8s-version-145466
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-145466 -n old-k8s-version-145466: exit status 7 (68.253707ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-145466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
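EnableAddonAfterStop intentionally queries a profile that has just been stopped, accepts the resulting non-zero exit from "status" (the harness notes it "may be ok"), and then enables the dashboard addon. Below is a small Go sketch of driving the CLI that way and reading the exit code; the binary path, profile name, and flags are copied from the log, while the surrounding code is illustrative rather than the real test.

// statusthenenable.go: hedged sketch of tolerating a non-zero `minikube status`
// exit code on a stopped profile before enabling an addon, as logged above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes the CLI and returns its combined output plus the exit code;
// a non-zero exit is reported rather than treated as an error.
func run(bin string, args ...string) (string, int, error) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode(), nil
	}
	return string(out), 0, err
}

func main() {
	bin := "out/minikube-linux-arm64" // path as it appears in this report

	out, code, err := run(bin, "status", "--format={{.Host}}", "-p", "old-k8s-version-145466", "-n", "old-k8s-version-145466")
	if err != nil {
		panic(err)
	}
	// The log shows exit status 7 with "Stopped" on stdout; the test proceeds anyway.
	fmt.Printf("status exit=%d output=%q (may be ok)\n", code, out)

	if _, code, err = run(bin, "addons", "enable", "dashboard", "-p", "old-k8s-version-145466",
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4"); err != nil || code != 0 {
		panic(fmt.Errorf("addons enable dashboard failed: exit=%d err=%v", code, err))
	}
	fmt.Println("dashboard addon enabled on the stopped profile")
}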

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-891255 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c0d6c323-f647-48a5-8169-3deb2b926c2c] Pending
helpers_test.go:344: "busybox" [c0d6c323-f647-48a5-8169-3deb2b926c2c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c0d6c323-f647-48a5-8169-3deb2b926c2c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003561898s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-891255 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-891255 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-891255 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.078754281s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-891255 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-891255 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-891255 --alsologtostderr -v=3: (12.054998723s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-891255 -n no-preload-891255
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-891255 -n no-preload-891255: exit status 7 (93.24234ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-891255 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (303.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-891255 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0815 01:29:02.899025  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:29:59.349995  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-891255 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (5m2.711528413s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-891255 -n no-preload-891255
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (303.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9rfcm" [8ee12d69-b96e-41ff-82ff-3cf6d42d5d23] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004058648s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9rfcm" [8ee12d69-b96e-41ff-82ff-3cf6d42d5d23] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003962914s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-145466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-145466 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
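VerifyKubernetesImages lists the images present in the profile ("image list --format=json") and calls out anything outside the expected Kubernetes image set, which is why the kindest/kindnetd and gcr.io/k8s-minikube/busybox images are reported above. The sketch below shows that kind of classification over plain image references; the allow-list and the tag-stripping helper are assumptions, decoding of the actual JSON output is deliberately left out, and how the real test builds its expected set is not shown in this report.

// imagecheck.go: hedged sketch of flagging "non-minikube" images from a list of
// image references, in the spirit of the VerifyKubernetesImages step above.
package main

import (
	"fmt"
	"strings"
)

// expected is an assumed allow-list of core images used only for this sketch.
var expected = map[string]bool{
	"registry.k8s.io/kube-apiserver":          true,
	"registry.k8s.io/kube-controller-manager": true,
	"registry.k8s.io/kube-scheduler":          true,
	"registry.k8s.io/kube-proxy":              true,
	"registry.k8s.io/etcd":                    true,
	"registry.k8s.io/coredns/coredns":         true,
	"registry.k8s.io/pause":                   true,
	"gcr.io/k8s-minikube/storage-provisioner": true,
}

// repo strips a trailing ":tag" (but not a registry ":port", which sits before the last "/").
func repo(ref string) string {
	if j := strings.LastIndex(ref, ":"); j > strings.LastIndex(ref, "/") {
		return ref[:j]
	}
	return ref
}

func main() {
	listed := []string{ // stand-in for the decoded `image list --format=json` output
		"registry.k8s.io/kube-proxy:v1.20.0",
		"kindest/kindnetd:v20240813-c6f155d6",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	}
	for _, ref := range listed {
		if !expected[repo(ref)] {
			fmt.Println("Found non-minikube image:", ref)
		}
	}
}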

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-145466 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145466 -n old-k8s-version-145466
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145466 -n old-k8s-version-145466: exit status 2 (312.729637ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-145466 -n old-k8s-version-145466
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-145466 -n old-k8s-version-145466: exit status 2 (328.350415ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-145466 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145466 -n old-k8s-version-145466
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-145466 -n old-k8s-version-145466
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.08s)
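Each Pause subtest runs the same cycle: pause the profile, confirm that "status" reports the API server as Paused and the kubelet as Stopped (both return exit status 2, which the harness again marks "may be ok"), then unpause and re-run both status checks. A compact sketch of that order of operations is below; the binary path and profile are taken from the log, and the loose exit-code handling is an illustrative simplification, not the harness itself.

// pausecycle.go: hedged sketch of the pause -> status -> unpause -> status cycle
// exercised by the Pause subtests in this report.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin, profile := "out/minikube-linux-arm64", "old-k8s-version-145466"
	steps := []struct {
		args    []string
		wantOut string // substring expected on stdout; "" means only run the command
	}{
		{[]string{"pause", "-p", profile, "--alsologtostderr", "-v=1"}, ""},
		// While paused, the two status checks below exit non-zero; only their output is asserted here.
		{[]string{"status", "--format={{.APIServer}}", "-p", profile, "-n", profile}, "Paused"},
		{[]string{"status", "--format={{.Kubelet}}", "-p", profile, "-n", profile}, "Stopped"},
		{[]string{"unpause", "-p", profile, "--alsologtostderr", "-v=1"}, ""},
		{[]string{"status", "--format={{.APIServer}}", "-p", profile, "-n", profile}, ""},
		{[]string{"status", "--format={{.Kubelet}}", "-p", profile, "-n", profile}, ""},
	}
	for _, s := range steps {
		out, _ := exec.Command(bin, s.args...).CombinedOutput() // exit code deliberately not fatal in this sketch
		if s.wantOut != "" && !strings.Contains(string(out), s.wantOut) {
			panic(fmt.Sprintf("expected %q in output of %v, got %q", s.wantOut, s.args, out))
		}
	}
	fmt.Println("pause/unpause cycle completed")
}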

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gqnj5" [e2190dfc-0fe6-40c9-a1de-85f5c04dec13] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004109381s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (72.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-454288 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-454288 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m12.048195911s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gqnj5" [e2190dfc-0fe6-40c9-a1de-85f5c04dec13] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004504056s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-891255 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-891255 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-891255 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-891255 --alsologtostderr -v=1: (1.383099556s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-891255 -n no-preload-891255
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-891255 -n no-preload-891255: exit status 2 (380.076303ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-891255 -n no-preload-891255
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-891255 -n no-preload-891255: exit status 2 (326.317577ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-891255 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-891255 -n no-preload-891255
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-891255 -n no-preload-891255
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-296046 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-296046 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m0.541476771s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-454288 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1d9d50c1-3741-4473-af67-858fac88d175] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1d9d50c1-3741-4473-af67-858fac88d175] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004038761s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-454288 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-296046 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b1024c17-dce0-482b-b7b7-318287bf76dd] Pending
helpers_test.go:344: "busybox" [b1024c17-dce0-482b-b7b7-318287bf76dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b1024c17-dce0-482b-b7b7-318287bf76dd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004177824s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-296046 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-454288 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-454288 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-454288 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-454288 --alsologtostderr -v=3: (12.216304094s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-296046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-296046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.110854068s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-296046 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-296046 --alsologtostderr -v=3
E0815 01:34:02.899008  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-296046 --alsologtostderr -v=3: (11.966187477s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-454288 -n embed-certs-454288
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-454288 -n embed-certs-454288: exit status 7 (72.850196ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-454288 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (273.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-454288 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-454288 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m33.388374427s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-454288 -n embed-certs-454288
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (273.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-296046 -n default-k8s-diff-port-296046
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-296046 -n default-k8s-diff-port-296046: exit status 7 (70.354105ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-296046 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-296046 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0815 01:34:59.350739  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:35.471035  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:35.477419  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:35.488782  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:35.510167  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:35.551572  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:35.632974  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:35.794463  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:36.116133  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:36.757812  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:38.039427  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:40.601102  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:45.722749  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:35:55.964094  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:36:16.445445  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:36:57.406941  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:08.499952  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:08.506517  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:08.518133  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:08.539502  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:08.580904  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:08.662435  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:08.824191  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:09.145834  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:09.787838  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:11.070001  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:13.632048  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:18.753555  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:28.995660  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:37:49.478023  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:38:19.328359  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:38:30.440058  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-296046 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m31.627740984s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-296046 -n default-k8s-diff-port-296046
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8k656" [55ad9668-3151-4f04-ad53-2ed0239fd18a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003858041s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bmpbk" [8c41792f-4c25-4531-85be-1421815b01df] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007380955s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8k656" [55ad9668-3151-4f04-ad53-2ed0239fd18a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008398287s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-454288 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bmpbk" [8c41792f-4c25-4531-85be-1421815b01df] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004993914s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-296046 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-454288 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-454288 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-454288 --alsologtostderr -v=1: (1.034817258s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-454288 -n embed-certs-454288
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-454288 -n embed-certs-454288: exit status 2 (415.525821ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-454288 -n embed-certs-454288
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-454288 -n embed-certs-454288: exit status 2 (411.272682ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-454288 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-454288 -n embed-certs-454288
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-454288 -n embed-certs-454288
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-296046 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-296046 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-296046 --alsologtostderr -v=1: (1.084220027s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-296046 -n default-k8s-diff-port-296046
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-296046 -n default-k8s-diff-port-296046: exit status 2 (411.860045ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-296046 -n default-k8s-diff-port-296046
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-296046 -n default-k8s-diff-port-296046: exit status 2 (385.428167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-296046 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-296046 --alsologtostderr -v=1: (1.328223139s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-296046 -n default-k8s-diff-port-296046
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-296046 -n default-k8s-diff-port-296046
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-295812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-295812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (46.232716871s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (72.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0815 01:39:42.417713  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m12.063075136s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-295812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0815 01:39:52.362267  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-295812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.031816069s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-295812 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-295812 --alsologtostderr -v=3: (1.385894612s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-295812 -n newest-cni-295812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-295812 -n newest-cni-295812: exit status 7 (74.739246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-295812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-295812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0815 01:39:59.349965  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/addons-428464/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-295812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (16.266749328s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-295812 -n newest-cni-295812
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-295812 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-295812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-295812 -n newest-cni-295812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-295812 -n newest-cni-295812: exit status 2 (329.766323ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-295812 -n newest-cni-295812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-295812 -n newest-cni-295812: exit status 2 (322.843303ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-295812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-295812 -n newest-cni-295812
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-295812 -n newest-cni-295812
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.25s)
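Editor's note: the Pause step expects `minikube status` to exit non-zero while components are paused, which is why the report marks exit status 2 as "(may be ok)": {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped until the profile is unpaused. A sketch of the same check by hand, using the profile name from the log:

$ minikube pause -p newest-cni-295812
$ minikube status -p newest-cni-295812 --format='{{.APIServer}}'   # Paused, exit code 2
$ minikube status -p newest-cni-295812 --format='{{.Kubelet}}'     # Stopped, exit code 2
$ minikube unpause -p newest-cni-295812
$ minikube status -p newest-cni-295812 --format='{{.APIServer}}'   # expected to report Running again, exit code 0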
E0815 01:45:18.430256  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:18.436653  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:18.448076  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:18.469506  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:18.511214  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:18.592705  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:18.754217  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:19.075912  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:19.718084  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:20.999713  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:23.561488  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:28.683458  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:35.470110  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:45:38.925743  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-404506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (70.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m10.234356494s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-404506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4b7jj" [ace9d856-495a-4ae7-b677-aee1d9b818f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4b7jj" [ace9d856-495a-4ae7-b677-aee1d9b818f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006081666s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.43s)
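Editor's note: each NetCatPod step force-replaces testdata/netcat-deployment.yaml and waits for the app=netcat pod to become Ready. Roughly the same smoke test can be run without the repo's testdata; the image and command below are illustrative, not what this suite actually deploys:

$ kubectl --context auto-404506 create deployment netcat --image=registry.k8s.io/e2e-test-images/agnhost:2.40 -- /agnhost netexec --http-port=8080
$ kubectl --context auto-404506 wait pod -l app=netcat --for=condition=Ready --timeout=15m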

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-404506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)
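Editor's note: the DNS step only verifies in-cluster service discovery, i.e. that kubernetes.default resolves via the pod's search domains to the ClusterIP of kubernetes.default.svc.cluster.local. Checked by hand from the same pod (the resolv.conf line is extra, not part of the test):

$ kubectl --context auto-404506 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-404506 exec deployment/netcat -- cat /etc/resolv.conf   # cluster DNS server and search path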

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.24s)
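Editor's note: Localhost and HairPin differ only in the target of the port check: localhost:8080 is the pod reaching itself over loopback, while netcat:8080 resolves the netcat name (presumably a Service fronting the same pod) and loops back through it, which only succeeds when the CNI/kube-proxy handles hairpin traffic. Both probes are plain netcat connect checks:

$ kubectl --context auto-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback
$ kubectl --context auto-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # via the service name (hairpin)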

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (70.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0815 01:41:03.170179  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/old-k8s-version-145466/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m10.793714263s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6vtfx" [dccd878b-2603-45f6-92a3-9aa9d8d60809] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004902069s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
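Editor's note: the ControllerPod steps just wait for the CNI's own DaemonSet pod to be Running (label app=kindnet here; the calico profile below waits on k8s-app=calico-node). Done by hand it is a label-selected wait in kube-system:

$ kubectl --context kindnet-404506 -n kube-system get pods -l app=kindnet
$ kubectl --context kindnet-404506 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m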

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-404506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-404506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n86zm" [012a6f5b-4d1e-427c-991c-17c0b5d56c19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n86zm" [012a6f5b-4d1e-427c-991c-17c0b5d56c19] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005816305s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-404506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2nh2h" [8dab17e4-eefa-4260-b6b2-83f35757bc1b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004608231s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (58.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0815 01:42:08.499682  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/no-preload-891255/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (58.178014019s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.18s)
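Editor's note: the custom-flannel profile shows that --cni accepts a path to a CNI manifest as well as the built-in names (auto, bridge, calico, cilium, flannel, kindnet). A sketch with a local manifest; the path and profile name below are placeholders:

$ minikube start -p custom-cni-demo --driver=docker --container-runtime=containerd --cni=/path/to/my-cni.yaml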

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-404506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-404506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xgs48" [275688d2-75ba-4ddb-a4ce-e45bb405a5ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xgs48" [275688d2-75ba-4ddb-a4ce-e45bb405a5ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004932338s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-404506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (72.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m12.053062366s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-404506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-404506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zz22d" [079a4d84-31cd-4ed6-88ec-0c7d978180d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zz22d" [079a4d84-31cd-4ed6-88ec-0c7d978180d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004097472s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-404506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (50.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0815 01:43:45.968031  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:43:52.352991  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:43:52.359382  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:43:52.370855  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:43:52.392227  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:43:52.433606  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:43:52.515097  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:43:52.676658  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:43:52.997931  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:43:53.639682  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:43:54.921817  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:43:57.484106  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:44:02.606293  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (50.975396996s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-404506 "pgrep -a kubelet"
E0815 01:44:02.903962  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/functional-369279/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-404506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wtt8b" [1a8aac74-75e7-4405-9d3f-0ebb5fb18b9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wtt8b" [1a8aac74-75e7-4405-9d3f-0ebb5fb18b9a] Running
E0815 01:44:12.848065  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/default-k8s-diff-port-296046/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003238856s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-404506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vm5gt" [f575c7b4-f7dd-4a25-b54c-673729b691f2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005877441s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (75.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-404506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m15.31606033s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-404506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-404506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n6x49" [b811b8d2-af22-4b77-b4b8-9b3a11a51ef2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n6x49" [b811b8d2-af22-4b77-b4b8-9b3a11a51ef2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004766938s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-404506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-404506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-404506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nxt5t" [54dfa8c1-2222-4216-aff2-4299ad031b12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nxt5t" [54dfa8c1-2222-4216-aff2-4299ad031b12] Running
E0815 01:45:59.407483  592660 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-587265/.minikube/profiles/auto-404506/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004294055s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-404506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-404506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (28/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.52s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-742911 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-742911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-742911
--- SKIP: TestDownloadOnlyKic (0.52s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-127245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-127245
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-404506 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-404506

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-404506

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-404506

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-404506

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-404506

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-404506

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-404506

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-404506

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-404506

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-404506

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

>>> host: /etc/hosts:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

>>> host: /etc/resolv.conf:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-404506

>>> host: crictl pods:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

>>> host: crictl containers:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

>>> k8s: describe netcat deployment:
error: context "kubenet-404506" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-404506" does not exist

>>> k8s: netcat logs:
error: context "kubenet-404506" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-404506" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-404506" does not exist

>>> k8s: coredns logs:
error: context "kubenet-404506" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-404506" does not exist

>>> k8s: api server logs:
error: context "kubenet-404506" does not exist

>>> host: /etc/cni:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

>>> host: ip a s:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

>>> host: ip r s:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

>>> host: iptables-save:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

>>> host: iptables table nat:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-404506" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-404506

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-404506"

                                                
                                                
----------------------- debugLogs end: kubenet-404506 [took: 4.161060196s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-404506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-404506
--- SKIP: TestNetworkPlugins/group/kubenet (4.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-404506 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-404506" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-404506

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-404506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-404506"

                                                
                                                
----------------------- debugLogs end: cilium-404506 [took: 5.891845355s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-404506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-404506
--- SKIP: TestNetworkPlugins/group/cilium (6.08s)

                                                
                                    