Test Report: Docker_Linux_containerd_arm64 19529

                    
d7f9f66bdcb95e27f1005d5ce9d414c92a72aaf8:2024-08-28:35983

Failed tests (2/328)

| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                               | 199.87       |
| 302   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 383.38       |
TestAddons/serial/Volcano (199.87s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 40.250546ms
addons_test.go:897: volcano-scheduler stabilized in 40.378061ms
addons_test.go:913: volcano-controller stabilized in 40.443858ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-g52qk" [c20956e7-48c7-45a1-b114-f15e80ed770b] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003723355s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-phvl2" [db0ad15b-1b46-478b-a106-3482b6541bff] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004558022s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-l985r" [6e9befc4-b3ff-441f-90b4-d32cbd5c073b] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0037937s
addons_test.go:932: (dbg) Run:  kubectl --context addons-606058 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-606058 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-606058 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [8a170c3c-b0c6-48f0-a384-d646076e8d41] Pending
helpers_test.go:344: "test-job-nginx-0" [8a170c3c-b0c6-48f0-a384-d646076e8d41] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-606058 -n addons-606058
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-28 17:55:12.721574774 +0000 UTC m=+435.413714687
addons_test.go:964: (dbg) Run:  kubectl --context addons-606058 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-606058 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-21c14280-9ead-4770-aba3-1c9d88e6a713
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kqvgf (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-kqvgf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-606058 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-606058 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
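The FailedScheduling event above shows the test job's 1-CPU request could not fit on the single minikube node: the profile was started with --memory=4000 and the default two CPUs (the docker inspect below reports NanoCpus of 2000000000, i.e. 2 CPUs), and the many enabled addons already hold CPU requests of their own. As a rough, illustrative follow-up (not part of the recorded test output), the allocatable-versus-requested CPU arithmetic could be checked against a still-running addons-606058 profile with standard kubectl commands:

	# Illustrative only; assumes the addons-606058 profile is still up.
	# Show the node's allocatable CPU and what has already been requested on it.
	kubectl --context addons-606058 describe node addons-606058 | grep -A 15 'Allocated resources'
	# List each scheduled container's CPU request to compare against the node's 2 allocatable CPUs.
	kubectl --context addons-606058 get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].resources.requests.cpu}{"\n"}{end}'
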
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-606058
helpers_test.go:235: (dbg) docker inspect addons-606058:

-- stdout --
	[
	    {
	        "Id": "ba7fd7a7df26e5864c3cddb4958c1a11cf0d074c58d0723635d40cef8703f8da",
	        "Created": "2024-08-28T17:48:40.115859028Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301451,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-28T17:48:40.289463083Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cc8dc59c2b679153d99f84cc70dab3e87225f8a0d04f61969b54714a9c4cd4d",
	        "ResolvConfPath": "/var/lib/docker/containers/ba7fd7a7df26e5864c3cddb4958c1a11cf0d074c58d0723635d40cef8703f8da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba7fd7a7df26e5864c3cddb4958c1a11cf0d074c58d0723635d40cef8703f8da/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba7fd7a7df26e5864c3cddb4958c1a11cf0d074c58d0723635d40cef8703f8da/hosts",
	        "LogPath": "/var/lib/docker/containers/ba7fd7a7df26e5864c3cddb4958c1a11cf0d074c58d0723635d40cef8703f8da/ba7fd7a7df26e5864c3cddb4958c1a11cf0d074c58d0723635d40cef8703f8da-json.log",
	        "Name": "/addons-606058",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-606058:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-606058",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9072efb74f60860eb2dcfbdc676659792e47a344e491a4eebe993ad90afd64d1-init/diff:/var/lib/docker/overlay2/68d9a87ad0f678e89d4bd37593e54708aeddbc1992258326f1e13c1ad826f200/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9072efb74f60860eb2dcfbdc676659792e47a344e491a4eebe993ad90afd64d1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9072efb74f60860eb2dcfbdc676659792e47a344e491a4eebe993ad90afd64d1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9072efb74f60860eb2dcfbdc676659792e47a344e491a4eebe993ad90afd64d1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-606058",
	                "Source": "/var/lib/docker/volumes/addons-606058/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-606058",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-606058",
	                "name.minikube.sigs.k8s.io": "addons-606058",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43337f54cdc85e907c659e5ab5d5a52a52c1a513b5c810aad0547639964b1801",
	            "SandboxKey": "/var/run/docker/netns/43337f54cdc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-606058": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "465c1eb3f10dd948f85e5bd2bf9400b423bc9818195afdf2608f02431ac4fd0f",
	                    "EndpointID": "45b988f5a8b3a826b4203dd073a4c59d4b13201d16317b859cfffaf1054df473",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-606058",
	                        "ba7fd7a7df26"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-606058 -n addons-606058
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-606058 logs -n 25: (1.601292848s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-567694   | jenkins | v1.33.1 | 28 Aug 24 17:47 UTC |                     |
	|         | -p download-only-567694              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC | 28 Aug 24 17:48 UTC |
	| delete  | -p download-only-567694              | download-only-567694   | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC | 28 Aug 24 17:48 UTC |
	| start   | -o=json --download-only              | download-only-300361   | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC |                     |
	|         | -p download-only-300361              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC | 28 Aug 24 17:48 UTC |
	| delete  | -p download-only-300361              | download-only-300361   | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC | 28 Aug 24 17:48 UTC |
	| delete  | -p download-only-567694              | download-only-567694   | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC | 28 Aug 24 17:48 UTC |
	| delete  | -p download-only-300361              | download-only-300361   | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC | 28 Aug 24 17:48 UTC |
	| start   | --download-only -p                   | download-docker-843459 | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC |                     |
	|         | download-docker-843459               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-843459            | download-docker-843459 | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC | 28 Aug 24 17:48 UTC |
	| start   | --download-only -p                   | binary-mirror-394784   | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC |                     |
	|         | binary-mirror-394784                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35691               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-394784              | binary-mirror-394784   | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC | 28 Aug 24 17:48 UTC |
	| addons  | disable dashboard -p                 | addons-606058          | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC |                     |
	|         | addons-606058                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-606058          | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC |                     |
	|         | addons-606058                        |                        |         |         |                     |                     |
	| start   | -p addons-606058 --wait=true         | addons-606058          | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC | 28 Aug 24 17:51 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 17:48:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 17:48:15.661929  300953 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:48:15.662120  300953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:48:15.662134  300953 out.go:358] Setting ErrFile to fd 2...
	I0828 17:48:15.662140  300953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:48:15.662407  300953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	I0828 17:48:15.662878  300953 out.go:352] Setting JSON to false
	I0828 17:48:15.663827  300953 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5445,"bootTime":1724861851,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0828 17:48:15.663896  300953 start.go:139] virtualization:  
	I0828 17:48:15.666282  300953 out.go:177] * [addons-606058] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0828 17:48:15.669090  300953 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:48:15.669245  300953 notify.go:220] Checking for updates...
	I0828 17:48:15.673537  300953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:48:15.675516  300953 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	I0828 17:48:15.677218  300953 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	I0828 17:48:15.679022  300953 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0828 17:48:15.680739  300953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:48:15.682482  300953 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:48:15.702177  300953 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 17:48:15.702312  300953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 17:48:15.766975  300953 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-28 17:48:15.757351352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 17:48:15.767082  300953 docker.go:307] overlay module found
	I0828 17:48:15.770319  300953 out.go:177] * Using the docker driver based on user configuration
	I0828 17:48:15.771945  300953 start.go:297] selected driver: docker
	I0828 17:48:15.771962  300953 start.go:901] validating driver "docker" against <nil>
	I0828 17:48:15.771975  300953 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:48:15.772596  300953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 17:48:15.821974  300953 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-28 17:48:15.812939011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 17:48:15.822144  300953 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 17:48:15.822374  300953 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:48:15.824629  300953 out.go:177] * Using Docker driver with root privileges
	I0828 17:48:15.826470  300953 cni.go:84] Creating CNI manager for ""
	I0828 17:48:15.826496  300953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0828 17:48:15.826508  300953 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0828 17:48:15.826627  300953 start.go:340] cluster config:
	{Name:addons-606058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:48:15.828961  300953 out.go:177] * Starting "addons-606058" primary control-plane node in "addons-606058" cluster
	I0828 17:48:15.830912  300953 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0828 17:48:15.832815  300953 out.go:177] * Pulling base image v0.0.44-1724775115-19521 ...
	I0828 17:48:15.834545  300953 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0828 17:48:15.834616  300953 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0828 17:48:15.834628  300953 cache.go:56] Caching tarball of preloaded images
	I0828 17:48:15.834641  300953 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0828 17:48:15.834708  300953 preload.go:172] Found /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 17:48:15.834718  300953 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0828 17:48:15.835063  300953 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/config.json ...
	I0828 17:48:15.835094  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/config.json: {Name:mk9962814fecf81d0c42293d0d523d39f4ab744b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:15.850259  300953 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 17:48:15.850366  300953 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0828 17:48:15.850386  300953 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0828 17:48:15.850391  300953 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0828 17:48:15.850399  300953 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0828 17:48:15.850404  300953 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from local cache
	I0828 17:48:32.778730  300953 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from cached tarball
	I0828 17:48:32.778772  300953 cache.go:194] Successfully downloaded all kic artifacts
	I0828 17:48:32.778820  300953 start.go:360] acquireMachinesLock for addons-606058: {Name:mk4f41ddc3c2bf16c6e378a04d7059150e769164 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:48:32.778939  300953 start.go:364] duration metric: took 94.358µs to acquireMachinesLock for "addons-606058"
	I0828 17:48:32.778972  300953 start.go:93] Provisioning new machine with config: &{Name:addons-606058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606058 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0828 17:48:32.779054  300953 start.go:125] createHost starting for "" (driver="docker")
	I0828 17:48:32.781626  300953 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0828 17:48:32.781878  300953 start.go:159] libmachine.API.Create for "addons-606058" (driver="docker")
	I0828 17:48:32.781914  300953 client.go:168] LocalClient.Create starting
	I0828 17:48:32.782020  300953 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem
	I0828 17:48:33.213363  300953 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/cert.pem
	I0828 17:48:33.716901  300953 cli_runner.go:164] Run: docker network inspect addons-606058 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0828 17:48:33.735764  300953 cli_runner.go:211] docker network inspect addons-606058 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0828 17:48:33.735860  300953 network_create.go:284] running [docker network inspect addons-606058] to gather additional debugging logs...
	I0828 17:48:33.735883  300953 cli_runner.go:164] Run: docker network inspect addons-606058
	W0828 17:48:33.751054  300953 cli_runner.go:211] docker network inspect addons-606058 returned with exit code 1
	I0828 17:48:33.751087  300953 network_create.go:287] error running [docker network inspect addons-606058]: docker network inspect addons-606058: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-606058 not found
	I0828 17:48:33.751106  300953 network_create.go:289] output of [docker network inspect addons-606058]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-606058 not found
	
	** /stderr **
	I0828 17:48:33.751228  300953 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0828 17:48:33.767346  300953 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016fc9e0}
	I0828 17:48:33.767406  300953 network_create.go:124] attempt to create docker network addons-606058 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0828 17:48:33.767465  300953 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-606058 addons-606058
	I0828 17:48:33.835093  300953 network_create.go:108] docker network addons-606058 192.168.49.0/24 created
	I0828 17:48:33.835128  300953 kic.go:121] calculated static IP "192.168.49.2" for the "addons-606058" container
	I0828 17:48:33.835206  300953 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0828 17:48:33.849336  300953 cli_runner.go:164] Run: docker volume create addons-606058 --label name.minikube.sigs.k8s.io=addons-606058 --label created_by.minikube.sigs.k8s.io=true
	I0828 17:48:33.866998  300953 oci.go:103] Successfully created a docker volume addons-606058
	I0828 17:48:33.867104  300953 cli_runner.go:164] Run: docker run --rm --name addons-606058-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606058 --entrypoint /usr/bin/test -v addons-606058:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib
	I0828 17:48:35.906041  300953 cli_runner.go:217] Completed: docker run --rm --name addons-606058-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606058 --entrypoint /usr/bin/test -v addons-606058:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib: (2.038893296s)
	I0828 17:48:35.906077  300953 oci.go:107] Successfully prepared a docker volume addons-606058
	I0828 17:48:35.906092  300953 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0828 17:48:35.906112  300953 kic.go:194] Starting extracting preloaded images to volume ...
	I0828 17:48:35.906196  300953 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-606058:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir
	I0828 17:48:40.029719  300953 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-606058:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir: (4.123480484s)
	I0828 17:48:40.029752  300953 kic.go:203] duration metric: took 4.12363657s to extract preloaded images to volume ...
	W0828 17:48:40.029909  300953 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0828 17:48:40.030024  300953 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0828 17:48:40.100544  300953 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-606058 --name addons-606058 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606058 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-606058 --network addons-606058 --ip 192.168.49.2 --volume addons-606058:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce
	I0828 17:48:40.446830  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Running}}
	I0828 17:48:40.471050  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:48:40.498312  300953 cli_runner.go:164] Run: docker exec addons-606058 stat /var/lib/dpkg/alternatives/iptables
	I0828 17:48:40.554508  300953 oci.go:144] the created container "addons-606058" has a running status.
	I0828 17:48:40.554536  300953 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa...
	I0828 17:48:40.877576  300953 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0828 17:48:40.907930  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:48:40.931020  300953 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0828 17:48:40.931045  300953 kic_runner.go:114] Args: [docker exec --privileged addons-606058 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0828 17:48:41.033404  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:48:41.062090  300953 machine.go:93] provisionDockerMachine start ...
	I0828 17:48:41.062184  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:48:41.084140  300953 main.go:141] libmachine: Using SSH client type: native
	I0828 17:48:41.084405  300953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0828 17:48:41.084419  300953 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 17:48:41.266715  300953 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-606058
	
	I0828 17:48:41.266740  300953 ubuntu.go:169] provisioning hostname "addons-606058"
	I0828 17:48:41.266813  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:48:41.289912  300953 main.go:141] libmachine: Using SSH client type: native
	I0828 17:48:41.290152  300953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0828 17:48:41.290165  300953 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-606058 && echo "addons-606058" | sudo tee /etc/hostname
	I0828 17:48:41.461275  300953 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-606058
	
	I0828 17:48:41.461354  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:48:41.485659  300953 main.go:141] libmachine: Using SSH client type: native
	I0828 17:48:41.485916  300953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0828 17:48:41.485940  300953 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-606058' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-606058/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-606058' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:48:41.619233  300953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:48:41.619263  300953 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19529-294791/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-294791/.minikube}
	I0828 17:48:41.619293  300953 ubuntu.go:177] setting up certificates
	I0828 17:48:41.619306  300953 provision.go:84] configureAuth start
	I0828 17:48:41.619368  300953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606058
	I0828 17:48:41.635274  300953 provision.go:143] copyHostCerts
	I0828 17:48:41.635359  300953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-294791/.minikube/cert.pem (1123 bytes)
	I0828 17:48:41.635495  300953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-294791/.minikube/key.pem (1679 bytes)
	I0828 17:48:41.635571  300953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-294791/.minikube/ca.pem (1082 bytes)
	I0828 17:48:41.635629  300953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-294791/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca-key.pem org=jenkins.addons-606058 san=[127.0.0.1 192.168.49.2 addons-606058 localhost minikube]
	I0828 17:48:42.226419  300953 provision.go:177] copyRemoteCerts
	I0828 17:48:42.226501  300953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:48:42.226549  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:48:42.248856  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:48:42.344807  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0828 17:48:42.371267  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0828 17:48:42.397137  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 17:48:42.422156  300953 provision.go:87] duration metric: took 802.834933ms to configureAuth
	I0828 17:48:42.422183  300953 ubuntu.go:193] setting minikube options for container-runtime
	I0828 17:48:42.422375  300953 config.go:182] Loaded profile config "addons-606058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0828 17:48:42.422391  300953 machine.go:96] duration metric: took 1.36028234s to provisionDockerMachine
	I0828 17:48:42.422399  300953 client.go:171] duration metric: took 9.640474075s to LocalClient.Create
	I0828 17:48:42.422412  300953 start.go:167] duration metric: took 9.640536237s to libmachine.API.Create "addons-606058"
	I0828 17:48:42.422426  300953 start.go:293] postStartSetup for "addons-606058" (driver="docker")
	I0828 17:48:42.422436  300953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:48:42.422494  300953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:48:42.422547  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:48:42.438305  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:48:42.532423  300953 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:48:42.535440  300953 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0828 17:48:42.535475  300953 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0828 17:48:42.535497  300953 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0828 17:48:42.535506  300953 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0828 17:48:42.535520  300953 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-294791/.minikube/addons for local assets ...
	I0828 17:48:42.535587  300953 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-294791/.minikube/files for local assets ...
	I0828 17:48:42.535610  300953 start.go:296] duration metric: took 113.178484ms for postStartSetup
	I0828 17:48:42.535928  300953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606058
	I0828 17:48:42.550954  300953 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/config.json ...
	I0828 17:48:42.551232  300953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:48:42.551282  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:48:42.567598  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:48:42.660257  300953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0828 17:48:42.664671  300953 start.go:128] duration metric: took 9.885601287s to createHost
	I0828 17:48:42.664738  300953 start.go:83] releasing machines lock for "addons-606058", held for 9.885783628s
	I0828 17:48:42.664818  300953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606058
	I0828 17:48:42.680417  300953 ssh_runner.go:195] Run: cat /version.json
	I0828 17:48:42.680433  300953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:48:42.680475  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:48:42.680475  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:48:42.702434  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:48:42.702843  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:48:42.927140  300953 ssh_runner.go:195] Run: systemctl --version
	I0828 17:48:42.931766  300953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0828 17:48:42.936187  300953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0828 17:48:42.959658  300953 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0828 17:48:42.959734  300953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:48:42.987280  300953 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
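	The find/sed pass above adds a "name" field to the loopback CNI config, pins cniVersion to 1.0.0, and then moves any bridge/podman configs aside as *.mk_disabled. What remains in the CNI directory can be confirmed from inside the node (a sketch; exact file names depend on the base image):

    ls /etc/cni/net.d
    cat /etc/cni/net.d/*loopback.conf*
    # after the patch the loopback config should contain roughly:
    #   { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }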
	I0828 17:48:42.987305  300953 start.go:495] detecting cgroup driver to use...
	I0828 17:48:42.987342  300953 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0828 17:48:42.987422  300953 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0828 17:48:43.000001  300953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 17:48:43.019456  300953 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:48:43.019571  300953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:48:43.034236  300953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:48:43.052673  300953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:48:43.143107  300953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:48:43.237789  300953 docker.go:233] disabling docker service ...
	I0828 17:48:43.237893  300953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:48:43.256754  300953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:48:43.268796  300953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:48:43.359346  300953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:48:43.448434  300953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 17:48:43.459882  300953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:48:43.475356  300953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0828 17:48:43.484546  300953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0828 17:48:43.493651  300953 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0828 17:48:43.493735  300953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0828 17:48:43.502802  300953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 17:48:43.512326  300953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0828 17:48:43.521426  300953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 17:48:43.530742  300953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:48:43.539710  300953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0828 17:48:43.548634  300953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0828 17:48:43.557583  300953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0828 17:48:43.567144  300953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:48:43.575316  300953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 17:48:43.584226  300953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:48:43.681101  300953 ssh_runner.go:195] Run: sudo systemctl restart containerd
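	The sed edits above pin the sandbox image to pause:3.10, force the runc.v2 runtime with SystemdCgroup = false (matching the detected "cgroupfs" driver), point conf_dir at /etc/cni/net.d, and re-add enable_unprivileged_ports. After the restart the effective values can be spot-checked (a sketch based on the commands in this log):

    sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected, indentation aside:
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info > /dev/null && echo containerd responding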
	I0828 17:48:43.803225  300953 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0828 17:48:43.803344  300953 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0828 17:48:43.806753  300953 start.go:563] Will wait 60s for crictl version
	I0828 17:48:43.806853  300953 ssh_runner.go:195] Run: which crictl
	I0828 17:48:43.810061  300953 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:48:43.844950  300953 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0828 17:48:43.845069  300953 ssh_runner.go:195] Run: containerd --version
	I0828 17:48:43.865530  300953 ssh_runner.go:195] Run: containerd --version
	I0828 17:48:43.888004  300953 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0828 17:48:43.890038  300953 cli_runner.go:164] Run: docker network inspect addons-606058 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0828 17:48:43.905938  300953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0828 17:48:43.909477  300953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:48:43.920095  300953 kubeadm.go:883] updating cluster {Name:addons-606058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 17:48:43.920219  300953 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0828 17:48:43.920283  300953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:48:43.959769  300953 containerd.go:627] all images are preloaded for containerd runtime.
	I0828 17:48:43.959791  300953 containerd.go:534] Images already preloaded, skipping extraction
	I0828 17:48:43.959853  300953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:48:43.996372  300953 containerd.go:627] all images are preloaded for containerd runtime.
	I0828 17:48:43.996396  300953 cache_images.go:84] Images are preloaded, skipping loading
	I0828 17:48:43.996405  300953 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0828 17:48:43.996508  300953 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-606058 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-606058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
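	The unit and drop-in above are written a few lines below as kubelet.service and 10-kubeadm.conf; once the kubelet is started, the merged unit and the flags it actually runs with can be checked with systemd (a sketch):

    systemctl cat kubelet.service
    systemctl show kubelet -p ExecStart --no-pager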
	I0828 17:48:43.996575  300953 ssh_runner.go:195] Run: sudo crictl info
	I0828 17:48:44.036108  300953 cni.go:84] Creating CNI manager for ""
	I0828 17:48:44.036136  300953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0828 17:48:44.036146  300953 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 17:48:44.036169  300953 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-606058 NodeName:addons-606058 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 17:48:44.036314  300953 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-606058"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 17:48:44.036406  300953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 17:48:44.045832  300953 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 17:48:44.045929  300953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 17:48:44.055150  300953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0828 17:48:44.073184  300953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:48:44.091730  300953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
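	The 2167-byte kubeadm.yaml.new written here is the config rendered above; it can be sanity-checked with kubeadm itself before init runs (a sketch using the binary path from this run; `kubeadm config validate` is available in recent kubeadm releases):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new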
	I0828 17:48:44.110022  300953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0828 17:48:44.113497  300953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:48:44.124149  300953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:48:44.216761  300953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:48:44.230532  300953 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058 for IP: 192.168.49.2
	I0828 17:48:44.230554  300953 certs.go:194] generating shared ca certs ...
	I0828 17:48:44.230571  300953 certs.go:226] acquiring lock for ca certs: {Name:mke663c906ba93beaf12a5613882d3e46b93d46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:44.230691  300953 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-294791/.minikube/ca.key
	I0828 17:48:45.516091  300953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-294791/.minikube/ca.crt ...
	I0828 17:48:45.516127  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/ca.crt: {Name:mkd50fecf78ee072735e6ed8bbeaca83037d486c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:45.516369  300953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-294791/.minikube/ca.key ...
	I0828 17:48:45.516386  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/ca.key: {Name:mkc446a30de3b85a3b2bd12dfb35cf7658b122ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:45.517576  300953 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.key
	I0828 17:48:46.425841  300953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.crt ...
	I0828 17:48:46.425869  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.crt: {Name:mk6f6c9f8f7cabea0315ddd95e085e461d87d4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:46.426046  300953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.key ...
	I0828 17:48:46.426059  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.key: {Name:mka37fb33cd7f991e0118d1a007852c9374d7e3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:46.426145  300953 certs.go:256] generating profile certs ...
	I0828 17:48:46.426206  300953 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.key
	I0828 17:48:46.426226  300953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt with IP's: []
	I0828 17:48:46.827366  300953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt ...
	I0828 17:48:46.827403  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: {Name:mk78998033ac92996a462b952d46bc9823b49625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:46.827585  300953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.key ...
	I0828 17:48:46.827600  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.key: {Name:mk42ada07658224e7bb5c0ac04cb382af63b3e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:46.828325  300953 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.key.34728fe6
	I0828 17:48:46.828351  300953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.crt.34728fe6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0828 17:48:47.773413  300953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.crt.34728fe6 ...
	I0828 17:48:47.773451  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.crt.34728fe6: {Name:mk305e813116060ee227e4a91ff5881ccc353829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:47.773644  300953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.key.34728fe6 ...
	I0828 17:48:47.773661  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.key.34728fe6: {Name:mk50a9e2fc4969df77ed8f04270ff8b6cbfca3b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:47.773752  300953 certs.go:381] copying /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.crt.34728fe6 -> /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.crt
	I0828 17:48:47.773837  300953 certs.go:385] copying /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.key.34728fe6 -> /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.key
	I0828 17:48:47.773895  300953 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/proxy-client.key
	I0828 17:48:47.773918  300953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/proxy-client.crt with IP's: []
	I0828 17:48:48.570342  300953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/proxy-client.crt ...
	I0828 17:48:48.570377  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/proxy-client.crt: {Name:mk00e51deadd86c2efef561cc10b7145df107dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:48.570566  300953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/proxy-client.key ...
	I0828 17:48:48.570581  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/proxy-client.key: {Name:mk72293e93fa8c84d0ea6b712c8e850e574d63b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:48.571195  300953 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:48:48.571240  300953 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem (1082 bytes)
	I0828 17:48:48.571271  300953 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:48:48.571299  300953 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/key.pem (1679 bytes)
	I0828 17:48:48.571967  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:48:48.599692  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0828 17:48:48.625294  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:48:48.650427  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:48:48.676139  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0828 17:48:48.702405  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 17:48:48.727772  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:48:48.753678  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 17:48:48.777594  300953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:48:48.802142  300953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 17:48:48.819749  300953 ssh_runner.go:195] Run: openssl version
	I0828 17:48:48.825099  300953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:48:48.834553  300953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:48.837879  300953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:48.837974  300953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:48.844940  300953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
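	The b5213941.0 name above is the OpenSSL subject-hash of the minikube CA, which is how certificates are looked up in /etc/ssl/certs; the same link can be recreated by hand (a sketch):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem \
      /etc/ssl/certs/"$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)".0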
	I0828 17:48:48.854237  300953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:48:48.857327  300953 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 17:48:48.857398  300953 kubeadm.go:392] StartCluster: {Name:addons-606058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-606058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:48:48.857495  300953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0828 17:48:48.857561  300953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 17:48:48.894253  300953 cri.go:89] found id: ""
	I0828 17:48:48.894322  300953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 17:48:48.903173  300953 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 17:48:48.911824  300953 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0828 17:48:48.911919  300953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 17:48:48.922713  300953 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 17:48:48.922735  300953 kubeadm.go:157] found existing configuration files:
	
	I0828 17:48:48.922808  300953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 17:48:48.931712  300953 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 17:48:48.931802  300953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 17:48:48.940259  300953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 17:48:48.949855  300953 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 17:48:48.949926  300953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 17:48:48.958551  300953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 17:48:48.967081  300953 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 17:48:48.967176  300953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 17:48:48.975734  300953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 17:48:48.984247  300953 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 17:48:48.984336  300953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 17:48:48.992760  300953 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0828 17:48:49.033687  300953 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 17:48:49.033774  300953 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 17:48:49.053930  300953 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0828 17:48:49.054045  300953 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0828 17:48:49.054125  300953 kubeadm.go:310] OS: Linux
	I0828 17:48:49.054193  300953 kubeadm.go:310] CGROUPS_CPU: enabled
	I0828 17:48:49.054266  300953 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0828 17:48:49.054341  300953 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0828 17:48:49.054410  300953 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0828 17:48:49.054476  300953 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0828 17:48:49.054545  300953 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0828 17:48:49.054619  300953 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0828 17:48:49.054685  300953 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0828 17:48:49.054749  300953 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0828 17:48:49.114724  300953 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 17:48:49.114890  300953 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 17:48:49.115010  300953 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 17:48:49.120992  300953 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 17:48:49.124929  300953 out.go:235]   - Generating certificates and keys ...
	I0828 17:48:49.125034  300953 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 17:48:49.125105  300953 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 17:48:49.654304  300953 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 17:48:50.137318  300953 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 17:48:50.666194  300953 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 17:48:51.145062  300953 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 17:48:51.268025  300953 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 17:48:51.268261  300953 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-606058 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0828 17:48:51.529983  300953 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 17:48:51.530243  300953 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-606058 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0828 17:48:52.197303  300953 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 17:48:52.867609  300953 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 17:48:53.273846  300953 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 17:48:53.274078  300953 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 17:48:53.669803  300953 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 17:48:54.416003  300953 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 17:48:54.903097  300953 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 17:48:55.381586  300953 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 17:48:55.932520  300953 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 17:48:55.933374  300953 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 17:48:55.936252  300953 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 17:48:55.938745  300953 out.go:235]   - Booting up control plane ...
	I0828 17:48:55.938856  300953 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 17:48:55.938938  300953 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 17:48:55.939974  300953 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 17:48:55.951983  300953 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 17:48:55.958108  300953 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 17:48:55.958452  300953 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 17:48:56.069321  300953 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 17:48:56.069448  300953 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 17:48:58.070396  300953 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001402869s
	I0828 17:48:58.070492  300953 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 17:49:05.571808  300953 kubeadm.go:310] [api-check] The API server is healthy after 7.501394211s
	I0828 17:49:05.591125  300953 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 17:49:05.604511  300953 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 17:49:05.626301  300953 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 17:49:05.626492  300953 kubeadm.go:310] [mark-control-plane] Marking the node addons-606058 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 17:49:05.650576  300953 kubeadm.go:310] [bootstrap-token] Using token: rc5yir.v1lskskoe71bqpba
	I0828 17:49:05.652413  300953 out.go:235]   - Configuring RBAC rules ...
	I0828 17:49:05.652536  300953 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 17:49:05.670148  300953 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 17:49:05.684001  300953 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 17:49:05.687659  300953 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 17:49:05.691112  300953 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 17:49:05.695023  300953 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 17:49:05.978860  300953 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 17:49:06.403428  300953 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 17:49:06.978753  300953 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 17:49:06.980021  300953 kubeadm.go:310] 
	I0828 17:49:06.980090  300953 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 17:49:06.980096  300953 kubeadm.go:310] 
	I0828 17:49:06.980170  300953 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 17:49:06.980175  300953 kubeadm.go:310] 
	I0828 17:49:06.980199  300953 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 17:49:06.980255  300953 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 17:49:06.980304  300953 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 17:49:06.980309  300953 kubeadm.go:310] 
	I0828 17:49:06.980360  300953 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 17:49:06.980365  300953 kubeadm.go:310] 
	I0828 17:49:06.980411  300953 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 17:49:06.980415  300953 kubeadm.go:310] 
	I0828 17:49:06.980468  300953 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 17:49:06.980540  300953 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 17:49:06.980605  300953 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 17:49:06.980610  300953 kubeadm.go:310] 
	I0828 17:49:06.980692  300953 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 17:49:06.980766  300953 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 17:49:06.980770  300953 kubeadm.go:310] 
	I0828 17:49:06.980852  300953 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rc5yir.v1lskskoe71bqpba \
	I0828 17:49:06.980950  300953 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ebdbb96f571584ee7b5643d73c7e5f5ab59e30b970ea1dcadc55b518b8df31ff \
	I0828 17:49:06.980970  300953 kubeadm.go:310] 	--control-plane 
	I0828 17:49:06.980975  300953 kubeadm.go:310] 
	I0828 17:49:06.981056  300953 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 17:49:06.981066  300953 kubeadm.go:310] 
	I0828 17:49:06.981144  300953 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rc5yir.v1lskskoe71bqpba \
	I0828 17:49:06.981242  300953 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ebdbb96f571584ee7b5643d73c7e5f5ab59e30b970ea1dcadc55b518b8df31ff 
	I0828 17:49:06.984212  300953 kubeadm.go:310] W0828 17:48:49.029908    1044 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 17:49:06.984505  300953 kubeadm.go:310] W0828 17:48:49.030920    1044 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 17:49:06.984721  300953 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0828 17:49:06.984828  300953 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
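	Both deprecation warnings come from the v1beta3 config generated earlier; kubeadm's own suggested fix is a config migration, and the Service-Kubelet warning is cleared by enabling the unit (a sketch, run inside the node):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml
    sudo systemctl enable kubelet.service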
	I0828 17:49:06.984852  300953 cni.go:84] Creating CNI manager for ""
	I0828 17:49:06.984866  300953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0828 17:49:06.986839  300953 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0828 17:49:06.988639  300953 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0828 17:49:06.992319  300953 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0828 17:49:06.992338  300953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0828 17:49:07.031038  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
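	With the kindnet manifest applied, the node should go Ready once the CNI pods start; progress can be watched with the same kubeconfig the test uses (a sketch; pod names differ per run):

    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system -o wide
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide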
	I0828 17:49:07.328376  300953 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 17:49:07.328441  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:49:07.328511  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-606058 minikube.k8s.io/updated_at=2024_08_28T17_49_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=addons-606058 minikube.k8s.io/primary=true
	I0828 17:49:07.539598  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:49:07.539660  300953 ops.go:34] apiserver oom_adj: -16
	I0828 17:49:08.040616  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:49:08.540167  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:49:09.039812  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:49:09.539726  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:49:10.040389  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:49:10.539878  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:49:11.040419  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:49:11.540696  300953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:49:11.692823  300953 kubeadm.go:1113] duration metric: took 4.364442951s to wait for elevateKubeSystemPrivileges
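	The repeated `get sa default` calls above poll until the default ServiceAccount exists, after which the minikube-rbac clusterrolebinding created earlier is considered settled; the end state can be inspected directly (a sketch):

    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get serviceaccount default
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac -o wide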
	I0828 17:49:11.692860  300953 kubeadm.go:394] duration metric: took 22.835489094s to StartCluster
	I0828 17:49:11.692879  300953 settings.go:142] acquiring lock: {Name:mka844fbf5a951ef11587fd548e96fc1d30af8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:49:11.693002  300953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-294791/kubeconfig
	I0828 17:49:11.693386  300953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/kubeconfig: {Name:mkdafb119dde5c297a9c0a5213c3687bb184c63e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:49:11.693565  300953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0828 17:49:11.693601  300953 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0828 17:49:11.693853  300953 config.go:182] Loaded profile config "addons-606058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0828 17:49:11.693895  300953 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
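	The toEnable map above drives the per-addon Setting/Checking blocks that follow; the same set can be listed or toggled from the minikube CLI after start-up (a sketch, assuming the minikube binary built for this run is on PATH):

    minikube -p addons-606058 addons list
    minikube -p addons-606058 addons enable metrics-server
    minikube -p addons-606058 addons disable volcano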
	I0828 17:49:11.693974  300953 addons.go:69] Setting yakd=true in profile "addons-606058"
	I0828 17:49:11.694001  300953 addons.go:234] Setting addon yakd=true in "addons-606058"
	I0828 17:49:11.694029  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.694472  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.694843  300953 addons.go:69] Setting inspektor-gadget=true in profile "addons-606058"
	I0828 17:49:11.694875  300953 addons.go:234] Setting addon inspektor-gadget=true in "addons-606058"
	I0828 17:49:11.694906  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.695341  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.695784  300953 addons.go:69] Setting metrics-server=true in profile "addons-606058"
	I0828 17:49:11.695818  300953 addons.go:234] Setting addon metrics-server=true in "addons-606058"
	I0828 17:49:11.695843  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.696231  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.699365  300953 addons.go:69] Setting cloud-spanner=true in profile "addons-606058"
	I0828 17:49:11.699468  300953 addons.go:234] Setting addon cloud-spanner=true in "addons-606058"
	I0828 17:49:11.699527  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.700001  300953 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-606058"
	I0828 17:49:11.700027  300953 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-606058"
	I0828 17:49:11.700048  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.700415  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.700687  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.701421  300953 addons.go:69] Setting default-storageclass=true in profile "addons-606058"
	I0828 17:49:11.701457  300953 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-606058"
	I0828 17:49:11.701699  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.706687  300953 addons.go:69] Setting registry=true in profile "addons-606058"
	I0828 17:49:11.706825  300953 addons.go:234] Setting addon registry=true in "addons-606058"
	I0828 17:49:11.706869  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.713767  300953 addons.go:69] Setting gcp-auth=true in profile "addons-606058"
	I0828 17:49:11.713831  300953 mustload.go:65] Loading cluster: addons-606058
	I0828 17:49:11.714011  300953 config.go:182] Loaded profile config "addons-606058": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0828 17:49:11.714266  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.718001  300953 addons.go:69] Setting storage-provisioner=true in profile "addons-606058"
	I0828 17:49:11.718085  300953 addons.go:234] Setting addon storage-provisioner=true in "addons-606058"
	I0828 17:49:11.718152  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.718639  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.724275  300953 addons.go:69] Setting ingress=true in profile "addons-606058"
	I0828 17:49:11.724331  300953 addons.go:234] Setting addon ingress=true in "addons-606058"
	I0828 17:49:11.724375  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.724852  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.729994  300953 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-606058"
	I0828 17:49:11.730132  300953 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-606058"
	I0828 17:49:11.730635  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.735072  300953 addons.go:69] Setting ingress-dns=true in profile "addons-606058"
	I0828 17:49:11.735141  300953 addons.go:234] Setting addon ingress-dns=true in "addons-606058"
	I0828 17:49:11.735197  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.735663  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.748306  300953 addons.go:69] Setting volcano=true in profile "addons-606058"
	I0828 17:49:11.748414  300953 addons.go:234] Setting addon volcano=true in "addons-606058"
	I0828 17:49:11.748489  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.749048  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.700837  300953 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-606058"
	I0828 17:49:11.751577  300953 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-606058"
	I0828 17:49:11.751619  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.752075  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.762379  300953 addons.go:69] Setting volumesnapshots=true in profile "addons-606058"
	I0828 17:49:11.762489  300953 addons.go:234] Setting addon volumesnapshots=true in "addons-606058"
	I0828 17:49:11.762536  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.762759  300953 out.go:177] * Verifying Kubernetes components...
	I0828 17:49:11.763227  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.787630  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.808043  300953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:49:11.810921  300953 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0828 17:49:11.812603  300953 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0828 17:49:11.812622  300953 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0828 17:49:11.812685  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:11.815748  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.830746  300953 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0828 17:49:11.830909  300953 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0828 17:49:11.831483  300953 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0828 17:49:11.833065  300953 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 17:49:11.835514  300953 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 17:49:11.835596  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:11.833198  300953 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 17:49:11.836313  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0828 17:49:11.836365  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:11.837599  300953 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0828 17:49:11.837662  300953 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0828 17:49:11.837756  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:11.853697  300953 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 17:49:11.855678  300953 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0828 17:49:11.859003  300953 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 17:49:11.861020  300953 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 17:49:11.861039  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0828 17:49:11.861114  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:11.916376  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:11.917284  300953 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0828 17:49:11.923486  300953 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0828 17:49:11.925758  300953 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0828 17:49:11.926185  300953 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0828 17:49:11.928646  300953 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0828 17:49:11.928669  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0828 17:49:11.928731  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:11.929443  300953 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0828 17:49:11.929460  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0828 17:49:11.929511  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:11.947112  300953 addons.go:234] Setting addon default-storageclass=true in "addons-606058"
	I0828 17:49:11.947190  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:11.947778  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:11.964732  300953 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0828 17:49:11.966415  300953 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0828 17:49:11.970194  300953 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0828 17:49:11.972030  300953 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0828 17:49:11.973844  300953 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0828 17:49:11.977090  300953 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0828 17:49:11.979459  300953 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0828 17:49:11.979544  300953 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 17:49:11.982334  300953 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0828 17:49:11.982507  300953 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 17:49:11.982521  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 17:49:11.982581  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:11.986950  300953 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0828 17:49:11.986973  300953 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0828 17:49:11.987045  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:12.015281  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.016597  300953 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-606058"
	I0828 17:49:12.016635  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:12.017049  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:12.037715  300953 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0828 17:49:12.043555  300953 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0828 17:49:12.043591  300953 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0828 17:49:12.043665  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:12.049912  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.107873  300953 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 17:49:12.107896  300953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 17:49:12.107963  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:12.131440  300953 out.go:177]   - Using image docker.io/registry:2.8.3
	I0828 17:49:12.135089  300953 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0828 17:49:12.137179  300953 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0828 17:49:12.137202  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0828 17:49:12.137269  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:12.159578  300953 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0828 17:49:12.161859  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.162499  300953 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 17:49:12.162515  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0828 17:49:12.162570  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:12.182321  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.194382  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.227510  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.230124  300953 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0828 17:49:12.231898  300953 out.go:177]   - Using image docker.io/busybox:stable
	I0828 17:49:12.238503  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.238971  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.240502  300953 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 17:49:12.240524  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0828 17:49:12.240589  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:12.250738  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.268576  300953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0828 17:49:12.268692  300953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:49:12.294299  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.309198  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.309877  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	W0828 17:49:12.313737  300953 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0828 17:49:12.313768  300953 retry.go:31] will retry after 129.388073ms: ssh: handshake failed: EOF
	I0828 17:49:12.319588  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:12.651026  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 17:49:12.705920  300953 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 17:49:12.705944  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0828 17:49:12.754187  300953 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0828 17:49:12.754212  300953 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0828 17:49:12.768143  300953 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0828 17:49:12.768170  300953 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0828 17:49:12.845633  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 17:49:12.908587  300953 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 17:49:12.908658  300953 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 17:49:12.947748  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0828 17:49:12.959602  300953 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0828 17:49:12.959628  300953 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0828 17:49:12.961326  300953 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0828 17:49:12.961347  300953 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0828 17:49:13.002789  300953 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0828 17:49:13.002825  300953 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0828 17:49:13.112578  300953 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0828 17:49:13.112606  300953 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0828 17:49:13.143127  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 17:49:13.164425  300953 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0828 17:49:13.164451  300953 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0828 17:49:13.173650  300953 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0828 17:49:13.173677  300953 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0828 17:49:13.183933  300953 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0828 17:49:13.183960  300953 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0828 17:49:13.190778  300953 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 17:49:13.190806  300953 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 17:49:13.227875  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 17:49:13.234971  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 17:49:13.277852  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0828 17:49:13.283179  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 17:49:13.290265  300953 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0828 17:49:13.290338  300953 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0828 17:49:13.336491  300953 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0828 17:49:13.336558  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0828 17:49:13.408162  300953 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0828 17:49:13.408220  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0828 17:49:13.474373  300953 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0828 17:49:13.474452  300953 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0828 17:49:13.528829  300953 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0828 17:49:13.528900  300953 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0828 17:49:13.535765  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 17:49:13.613418  300953 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0828 17:49:13.613493  300953 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0828 17:49:13.690790  300953 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0828 17:49:13.690873  300953 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0828 17:49:13.766101  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0828 17:49:13.806860  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0828 17:49:13.840456  300953 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0828 17:49:13.840543  300953 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0828 17:49:13.949112  300953 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0828 17:49:13.949193  300953 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0828 17:49:14.112101  300953 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0828 17:49:14.112173  300953 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0828 17:49:14.197702  300953 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 17:49:14.197774  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0828 17:49:14.469305  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.818245882s)
	I0828 17:49:14.469439  300953 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.200838551s)
	I0828 17:49:14.469548  300953 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.200530598s)
	I0828 17:49:14.469533  300953 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
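The two-second step that just completed (17:49:12.268 → 17:49:14.469) is minikube patching the coredns ConfigMap so that host.minikube.internal resolves to 192.168.49.1. The sed pipeline shown above injects roughly this hosts block into the Corefile:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

To inspect the result by hand you could run `kubectl --context addons-606058 -n kube-system get configmap coredns -o yaml` (a sketch; the context and ConfigMap names are the ones used in the log).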
	I0828 17:49:14.470598  300953 node_ready.go:35] waiting up to 6m0s for node "addons-606058" to be "Ready" ...
	I0828 17:49:14.475115  300953 node_ready.go:49] node "addons-606058" has status "Ready":"True"
	I0828 17:49:14.475144  300953 node_ready.go:38] duration metric: took 4.492642ms for node "addons-606058" to be "Ready" ...
	I0828 17:49:14.475155  300953 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 17:49:14.484330  300953 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-b4l47" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:14.610737  300953 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0828 17:49:14.610762  300953 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0828 17:49:14.740749  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 17:49:14.829916  300953 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 17:49:14.829985  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0828 17:49:14.948827  300953 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0828 17:49:14.948900  300953 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0828 17:49:14.973857  300953 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-606058" context rescaled to 1 replicas
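The rescale logged here trims CoreDNS down to a single replica for this single-node cluster (the two coredns-6f6b679f8f-* pods seen in this log, one of which is removed shortly afterwards). A hand-run equivalent, as a sketch (the exact flags are an assumption about how to reproduce it manually, not what kapi.go actually calls):

        kubectl --context addons-606058 -n kube-system scale deployment coredns --replicas=1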
	I0828 17:49:15.050686  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 17:49:15.269546  300953 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0828 17:49:15.269619  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0828 17:49:15.643929  300953 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0828 17:49:15.644004  300953 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0828 17:49:15.921698  300953 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0828 17:49:15.921770  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0828 17:49:15.987439  300953 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-b4l47" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-b4l47" not found
	I0828 17:49:15.987510  300953 pod_ready.go:82] duration metric: took 1.503139076s for pod "coredns-6f6b679f8f-b4l47" in "kube-system" namespace to be "Ready" ...
	E0828 17:49:15.987537  300953 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-b4l47" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-b4l47" not found
	I0828 17:49:15.987559  300953 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:16.659523  300953 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0828 17:49:16.659594  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0828 17:49:16.939665  300953 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 17:49:16.939740  300953 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0828 17:49:17.318173  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 17:49:17.995721  300953 pod_ready.go:103] pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace has status "Ready":"False"
	I0828 17:49:19.042331  300953 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0828 17:49:19.042522  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:19.076594  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:19.515810  300953 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0828 17:49:19.618956  300953 addons.go:234] Setting addon gcp-auth=true in "addons-606058"
	I0828 17:49:19.619010  300953 host.go:66] Checking if "addons-606058" exists ...
	I0828 17:49:19.619499  300953 cli_runner.go:164] Run: docker container inspect addons-606058 --format={{.State.Status}}
	I0828 17:49:19.648545  300953 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0828 17:49:19.648612  300953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606058
	I0828 17:49:19.679728  300953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/addons-606058/id_rsa Username:docker}
	I0828 17:49:19.978557  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.132893607s)
	I0828 17:49:19.978598  300953 addons.go:475] Verifying addon ingress=true in "addons-606058"
	I0828 17:49:19.978856  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.031081576s)
	I0828 17:49:19.978924  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.835772581s)
	I0828 17:49:19.978960  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.751023412s)
	I0828 17:49:19.978981  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.743946396s)
	I0828 17:49:19.980457  300953 out.go:177] * Verifying ingress addon...
	I0828 17:49:19.983065  300953 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0828 17:49:19.990338  300953 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0828 17:49:19.990413  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
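The ingress verification loop below polls the three pods matching app.kubernetes.io/name=ingress-nginx in the ingress-nginx namespace until they leave Pending. A rough hand-run equivalent, as a sketch (the app.kubernetes.io/component=controller label and the 6m timeout are assumptions based on the stock ingress-nginx manifests, since the admission create/patch job pods complete rather than become Ready):

        kubectl --context addons-606058 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
        kubectl --context addons-606058 -n ingress-nginx wait pod \
          -l app.kubernetes.io/component=controller \
          --for=condition=Ready --timeout=6m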
	I0828 17:49:20.001551  300953 pod_ready.go:103] pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace has status "Ready":"False"
	I0828 17:49:20.487910  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:21.066497  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:21.538313  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:21.994383  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:22.021826  300953 pod_ready.go:103] pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace has status "Ready":"False"
	I0828 17:49:22.245477  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.709632827s)
	I0828 17:49:22.245511  300953 addons.go:475] Verifying addon metrics-server=true in "addons-606058"
	I0828 17:49:22.245546  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.479376615s)
	I0828 17:49:22.245555  300953 addons.go:475] Verifying addon registry=true in "addons-606058"
	I0828 17:49:22.245701  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.962146466s)
	I0828 17:49:22.246006  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.968084682s)
	I0828 17:49:22.246013  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.43907307s)
	I0828 17:49:22.246096  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.505271082s)
	W0828 17:49:22.246138  300953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0828 17:49:22.246160  300953 retry.go:31] will retry after 357.788788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
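The stderr above is the usual CRD-ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl apply as the snapshot.storage.k8s.io CRDs, and the API server has not finished registering the new kind, so the REST mapping lookup fails even though the CRDs themselves report "created". The log shows the addon machinery recovering on its own, retrying with `kubectl apply --force` at 17:49:22.604 and completing at 17:49:24.248. A hand-rolled equivalent, as a sketch (the 60s timeout is an assumption; the file paths are the ones used above), would apply the CRD first, wait for it to be Established, then apply the class:

        kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
        kubectl wait --for=condition=Established --timeout=60s \
          crd/volumesnapshotclasses.snapshot.storage.k8s.io
        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml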
	I0828 17:49:22.246256  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.195494577s)
	I0828 17:49:22.247815  300953 out.go:177] * Verifying registry addon...
	I0828 17:49:22.249347  300953 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-606058 service yakd-dashboard -n yakd-dashboard
	
	I0828 17:49:22.251507  300953 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0828 17:49:22.269477  300953 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0828 17:49:22.269505  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:22.526297  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:22.604499  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 17:49:22.773833  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:22.909285  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.591019357s)
	I0828 17:49:22.909374  300953 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-606058"
	I0828 17:49:22.909591  300953 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.26102126s)
	I0828 17:49:22.912534  300953 out.go:177] * Verifying csi-hostpath-driver addon...
	I0828 17:49:22.912541  300953 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 17:49:22.919212  300953 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0828 17:49:22.923336  300953 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0828 17:49:22.926500  300953 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0828 17:49:22.926522  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:22.928469  300953 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0828 17:49:22.928506  300953 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0828 17:49:22.988316  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:22.988813  300953 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0828 17:49:22.988835  300953 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0828 17:49:23.058974  300953 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 17:49:23.058998  300953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0828 17:49:23.105992  300953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 17:49:23.256100  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:23.433371  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:23.533372  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:23.755789  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:23.924866  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:23.987841  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:24.248042  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.643448704s)
	I0828 17:49:24.267632  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:24.296608  300953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.190572063s)
	I0828 17:49:24.300108  300953 addons.go:475] Verifying addon gcp-auth=true in "addons-606058"
	I0828 17:49:24.303019  300953 out.go:177] * Verifying gcp-auth addon...
	I0828 17:49:24.306141  300953 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0828 17:49:24.358280  300953 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
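At this point the gcp-auth webhook manifests have only just been applied, so the selector finds zero pods; the poller keeps re-listing until one appears and becomes Ready. To look at the same state by hand (a sketch; only the namespace and label come from the log):

        kubectl --context addons-606058 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth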
	I0828 17:49:24.423754  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:24.487888  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:24.493968  300953 pod_ready.go:103] pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace has status "Ready":"False"
	I0828 17:49:24.756147  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:24.924813  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:24.988374  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:25.256418  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:25.424487  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:25.487807  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:25.755916  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:25.924783  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:26.026492  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:26.256483  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:26.426361  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:26.505760  300953 pod_ready.go:103] pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace has status "Ready":"False"
	I0828 17:49:26.537684  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:26.762443  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:26.924542  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:26.987871  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:27.256200  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:27.425018  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:27.487994  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:27.756547  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:27.924661  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:27.987544  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:28.258027  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:28.428917  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:28.488072  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:28.755726  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:28.924193  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:28.987881  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:28.993455  300953 pod_ready.go:103] pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace has status "Ready":"False"
	I0828 17:49:29.254980  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:29.425538  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:29.487900  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:29.755977  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:29.924529  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:29.987847  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:30.272727  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:30.423756  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:30.492084  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:30.755997  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:30.924292  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:30.988303  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:30.993829  300953 pod_ready.go:103] pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace has status "Ready":"False"
	I0828 17:49:31.255406  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:31.424021  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:31.487970  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:31.756865  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:31.924686  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:31.988279  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:32.255614  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:32.424716  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:32.488144  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:32.756011  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:32.924269  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:32.987605  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:33.255048  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:33.424255  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:33.488208  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:33.494561  300953 pod_ready.go:103] pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace has status "Ready":"False"
	I0828 17:49:33.756762  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:33.924690  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:33.988067  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:34.256585  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:34.459131  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:34.487802  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:34.756048  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:34.924629  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:34.987818  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:35.255836  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:35.424994  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:35.494764  300953 pod_ready.go:103] pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace has status "Ready":"False"
	I0828 17:49:35.525096  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:35.755643  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:35.924127  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:35.987996  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:36.256066  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:36.425565  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:36.488973  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:36.755808  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:36.925478  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:37.026386  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:37.256212  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:37.426034  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:37.490098  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:37.496518  300953 pod_ready.go:103] pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace has status "Ready":"False"
	I0828 17:49:37.756882  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:37.923732  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:37.987798  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:38.256226  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:38.424475  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:38.487777  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:38.755997  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:38.928684  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:38.988768  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:38.996749  300953 pod_ready.go:93] pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace has status "Ready":"True"
	I0828 17:49:38.996775  300953 pod_ready.go:82] duration metric: took 23.009187778s for pod "coredns-6f6b679f8f-p9qlh" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:38.996788  300953 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-606058" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.003782  300953 pod_ready.go:93] pod "etcd-addons-606058" in "kube-system" namespace has status "Ready":"True"
	I0828 17:49:39.003817  300953 pod_ready.go:82] duration metric: took 7.020935ms for pod "etcd-addons-606058" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.003837  300953 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-606058" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.012257  300953 pod_ready.go:93] pod "kube-apiserver-addons-606058" in "kube-system" namespace has status "Ready":"True"
	I0828 17:49:39.012295  300953 pod_ready.go:82] duration metric: took 8.448711ms for pod "kube-apiserver-addons-606058" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.012308  300953 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-606058" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.027906  300953 pod_ready.go:93] pod "kube-controller-manager-addons-606058" in "kube-system" namespace has status "Ready":"True"
	I0828 17:49:39.027938  300953 pod_ready.go:82] duration metric: took 15.622827ms for pod "kube-controller-manager-addons-606058" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.027953  300953 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9d9sc" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.035504  300953 pod_ready.go:93] pod "kube-proxy-9d9sc" in "kube-system" namespace has status "Ready":"True"
	I0828 17:49:39.035535  300953 pod_ready.go:82] duration metric: took 7.506488ms for pod "kube-proxy-9d9sc" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.035548  300953 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-606058" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.257339  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:39.392817  300953 pod_ready.go:93] pod "kube-scheduler-addons-606058" in "kube-system" namespace has status "Ready":"True"
	I0828 17:49:39.392842  300953 pod_ready.go:82] duration metric: took 357.286669ms for pod "kube-scheduler-addons-606058" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.392855  300953 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gvhjt" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.425377  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:39.488506  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:39.756509  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:39.792621  300953 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-gvhjt" in "kube-system" namespace has status "Ready":"True"
	I0828 17:49:39.792654  300953 pod_ready.go:82] duration metric: took 399.791592ms for pod "nvidia-device-plugin-daemonset-gvhjt" in "kube-system" namespace to be "Ready" ...
	I0828 17:49:39.792666  300953 pod_ready.go:39] duration metric: took 25.31749901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 17:49:39.792681  300953 api_server.go:52] waiting for apiserver process to appear ...
	I0828 17:49:39.792757  300953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:49:39.807993  300953 api_server.go:72] duration metric: took 28.11436303s to wait for apiserver process to appear ...
	I0828 17:49:39.808028  300953 api_server.go:88] waiting for apiserver healthz status ...
	I0828 17:49:39.808047  300953 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0828 17:49:39.817481  300953 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0828 17:49:39.818530  300953 api_server.go:141] control plane version: v1.31.0
	I0828 17:49:39.818555  300953 api_server.go:131] duration metric: took 10.520144ms to wait for apiserver health ...
	I0828 17:49:39.818564  300953 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 17:49:39.924389  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:39.987780  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:40.000605  300953 system_pods.go:59] 18 kube-system pods found
	I0828 17:49:40.000663  300953 system_pods.go:61] "coredns-6f6b679f8f-p9qlh" [6b94cdae-a69f-4822-9edb-8a70bb608e0c] Running
	I0828 17:49:40.000675  300953 system_pods.go:61] "csi-hostpath-attacher-0" [d1da15b2-47e1-4352-8f6c-03a70a96147c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 17:49:40.000688  300953 system_pods.go:61] "csi-hostpath-resizer-0" [7035d70b-7337-4486-b2e5-f216e56d7354] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 17:49:40.000701  300953 system_pods.go:61] "csi-hostpathplugin-vj7zr" [6edb235e-a6bb-4337-b61a-e306c569bca8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 17:49:40.000711  300953 system_pods.go:61] "etcd-addons-606058" [e16ce8d0-f011-4a5e-b359-b03d95d33c95] Running
	I0828 17:49:40.000717  300953 system_pods.go:61] "kindnet-rs9hc" [5fd12f47-b1aa-41a4-a1ef-0ad020bc0967] Running
	I0828 17:49:40.000721  300953 system_pods.go:61] "kube-apiserver-addons-606058" [50ec6f4e-55f1-4aa7-af89-6de599323909] Running
	I0828 17:49:40.000726  300953 system_pods.go:61] "kube-controller-manager-addons-606058" [27cc4dc8-fd3d-4c20-b464-b4b609ae93ae] Running
	I0828 17:49:40.000736  300953 system_pods.go:61] "kube-ingress-dns-minikube" [b0c54fba-444f-458f-896b-801671bc78c3] Running
	I0828 17:49:40.000740  300953 system_pods.go:61] "kube-proxy-9d9sc" [5cffa2cf-7756-4bc0-82ba-73c89fcbd5fc] Running
	I0828 17:49:40.000744  300953 system_pods.go:61] "kube-scheduler-addons-606058" [dc56bd52-3cf0-4645-8c6c-20567ed62fa5] Running
	I0828 17:49:40.000751  300953 system_pods.go:61] "metrics-server-84c5f94fbc-6724d" [dfa449a9-0492-4e3c-8e8c-a7325a127ba0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 17:49:40.000760  300953 system_pods.go:61] "nvidia-device-plugin-daemonset-gvhjt" [51a2fbcb-34cf-48c0-bcb5-bf6371120839] Running
	I0828 17:49:40.000768  300953 system_pods.go:61] "registry-6fb4cdfc84-qgmt4" [50643f06-10a7-469b-a36a-3c6496036a8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0828 17:49:40.000774  300953 system_pods.go:61] "registry-proxy-mjx8k" [064f11d9-7ab2-407b-9cef-5c27002ca5e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 17:49:40.000784  300953 system_pods.go:61] "snapshot-controller-56fcc65765-b85q6" [4786a9bd-caba-496d-a1fd-f578c7d65c23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 17:49:40.000793  300953 system_pods.go:61] "snapshot-controller-56fcc65765-zlsdt" [ff395c47-e0be-4990-abcc-abdbb6d16397] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 17:49:40.000801  300953 system_pods.go:61] "storage-provisioner" [a664bf09-8e5d-4ba0-8ee3-a76454810ef9] Running
	I0828 17:49:40.000810  300953 system_pods.go:74] duration metric: took 182.238688ms to wait for pod list to return data ...
	I0828 17:49:40.000823  300953 default_sa.go:34] waiting for default service account to be created ...
	I0828 17:49:40.191861  300953 default_sa.go:45] found service account: "default"
	I0828 17:49:40.191886  300953 default_sa.go:55] duration metric: took 191.056596ms for default service account to be created ...
	I0828 17:49:40.191897  300953 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 17:49:40.257988  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:40.400612  300953 system_pods.go:86] 18 kube-system pods found
	I0828 17:49:40.400650  300953 system_pods.go:89] "coredns-6f6b679f8f-p9qlh" [6b94cdae-a69f-4822-9edb-8a70bb608e0c] Running
	I0828 17:49:40.400662  300953 system_pods.go:89] "csi-hostpath-attacher-0" [d1da15b2-47e1-4352-8f6c-03a70a96147c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 17:49:40.400672  300953 system_pods.go:89] "csi-hostpath-resizer-0" [7035d70b-7337-4486-b2e5-f216e56d7354] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 17:49:40.400681  300953 system_pods.go:89] "csi-hostpathplugin-vj7zr" [6edb235e-a6bb-4337-b61a-e306c569bca8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 17:49:40.400686  300953 system_pods.go:89] "etcd-addons-606058" [e16ce8d0-f011-4a5e-b359-b03d95d33c95] Running
	I0828 17:49:40.400692  300953 system_pods.go:89] "kindnet-rs9hc" [5fd12f47-b1aa-41a4-a1ef-0ad020bc0967] Running
	I0828 17:49:40.400697  300953 system_pods.go:89] "kube-apiserver-addons-606058" [50ec6f4e-55f1-4aa7-af89-6de599323909] Running
	I0828 17:49:40.400708  300953 system_pods.go:89] "kube-controller-manager-addons-606058" [27cc4dc8-fd3d-4c20-b464-b4b609ae93ae] Running
	I0828 17:49:40.400714  300953 system_pods.go:89] "kube-ingress-dns-minikube" [b0c54fba-444f-458f-896b-801671bc78c3] Running
	I0828 17:49:40.400726  300953 system_pods.go:89] "kube-proxy-9d9sc" [5cffa2cf-7756-4bc0-82ba-73c89fcbd5fc] Running
	I0828 17:49:40.400733  300953 system_pods.go:89] "kube-scheduler-addons-606058" [dc56bd52-3cf0-4645-8c6c-20567ed62fa5] Running
	I0828 17:49:40.400740  300953 system_pods.go:89] "metrics-server-84c5f94fbc-6724d" [dfa449a9-0492-4e3c-8e8c-a7325a127ba0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 17:49:40.400749  300953 system_pods.go:89] "nvidia-device-plugin-daemonset-gvhjt" [51a2fbcb-34cf-48c0-bcb5-bf6371120839] Running
	I0828 17:49:40.400757  300953 system_pods.go:89] "registry-6fb4cdfc84-qgmt4" [50643f06-10a7-469b-a36a-3c6496036a8b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0828 17:49:40.400765  300953 system_pods.go:89] "registry-proxy-mjx8k" [064f11d9-7ab2-407b-9cef-5c27002ca5e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 17:49:40.400776  300953 system_pods.go:89] "snapshot-controller-56fcc65765-b85q6" [4786a9bd-caba-496d-a1fd-f578c7d65c23] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 17:49:40.400784  300953 system_pods.go:89] "snapshot-controller-56fcc65765-zlsdt" [ff395c47-e0be-4990-abcc-abdbb6d16397] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 17:49:40.400789  300953 system_pods.go:89] "storage-provisioner" [a664bf09-8e5d-4ba0-8ee3-a76454810ef9] Running
	I0828 17:49:40.400802  300953 system_pods.go:126] duration metric: took 208.897465ms to wait for k8s-apps to be running ...
	I0828 17:49:40.400815  300953 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 17:49:40.400873  300953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:49:40.415647  300953 system_svc.go:56] duration metric: took 14.822379ms WaitForService to wait for kubelet
	I0828 17:49:40.415725  300953 kubeadm.go:582] duration metric: took 28.722092903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:49:40.415762  300953 node_conditions.go:102] verifying NodePressure condition ...
	I0828 17:49:40.425115  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:40.488799  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:40.592106  300953 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0828 17:49:40.592138  300953 node_conditions.go:123] node cpu capacity is 2
	I0828 17:49:40.592151  300953 node_conditions.go:105] duration metric: took 176.353888ms to run NodePressure ...
	I0828 17:49:40.592176  300953 start.go:241] waiting for startup goroutines ...
	I0828 17:49:40.764518  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:40.957155  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:41.008586  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:41.256620  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:41.425817  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:41.488510  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:41.758863  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:41.923958  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:41.987475  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:42.257581  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:42.428083  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:42.488036  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:42.757166  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:42.925134  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:42.987535  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:43.255172  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:43.424469  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:43.487485  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:43.755876  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:43.924198  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:43.987707  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:44.257374  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:44.424807  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:44.487798  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:44.756921  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:44.924688  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:44.987590  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:45.257378  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:45.425545  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:45.487593  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:45.756680  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:45.924788  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:45.987447  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:46.255801  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:46.457753  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:46.489582  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:46.763744  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:46.924466  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:47.012924  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:47.255208  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:47.426048  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:47.488940  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:47.756320  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:47.924754  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:47.987618  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:48.255612  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:48.424327  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:48.488070  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:48.757948  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:48.923934  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:48.989617  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:49.256427  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:49.424420  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:49.488002  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:49.759676  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:49.924930  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:50.028381  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:50.255893  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 17:49:50.429386  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:50.488485  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:50.775084  300953 kapi.go:107] duration metric: took 28.523573988s to wait for kubernetes.io/minikube-addons=registry ...
	I0828 17:49:50.926605  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:50.987674  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:51.427049  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:51.524431  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:51.933804  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:51.988288  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:52.424225  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:52.487336  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:52.924418  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:52.995684  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:53.424857  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:53.488710  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:53.924444  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:53.987305  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:54.424254  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:54.488156  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:54.924890  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:55.006377  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:55.425142  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:55.488116  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:55.923914  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:55.987945  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:56.425725  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:56.489635  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:56.923991  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:56.988212  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:57.424800  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:57.487600  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:57.924617  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:57.987946  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:58.423868  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:58.488094  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:58.925345  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:58.988193  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:59.424391  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:59.487593  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:49:59.927879  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:49:59.988628  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:00.434248  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:00.489252  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:00.924807  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:00.988235  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:01.434696  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:01.555602  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:01.926288  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:01.988383  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:02.424475  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:02.488208  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:02.925093  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:03.034439  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:03.424263  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:03.488150  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:03.924430  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:03.987679  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:04.425106  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:04.488166  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:04.924086  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:04.987255  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:05.423939  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:05.488429  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:05.924063  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:05.987896  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:06.432638  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:06.525407  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:06.923599  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:06.987680  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:07.423749  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:07.488284  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:07.924581  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:08.025097  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:08.425028  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:08.487977  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:08.923633  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:08.988982  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:09.423780  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:09.487553  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:09.923738  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:09.987762  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:10.426949  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:10.533729  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:10.924327  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:10.987737  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:11.424755  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:11.487838  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:11.924628  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:11.991490  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:12.425527  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:12.488023  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:12.924447  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 17:50:13.025056  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:13.423947  300953 kapi.go:107] duration metric: took 50.5047329s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0828 17:50:13.487656  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:13.987619  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:14.487895  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:14.987161  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:15.487639  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:15.987532  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:16.488069  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:16.988094  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:17.487731  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:17.987547  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:18.488111  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:18.987296  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:19.487492  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:19.987830  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:20.487879  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:20.987083  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:21.487951  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:21.987847  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:22.487282  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:22.987465  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:23.487952  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:23.987909  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:24.488011  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:24.987681  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:25.488280  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:25.987724  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:26.489086  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:26.987446  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:27.488690  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:27.989792  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:28.487436  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:28.988446  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:29.488228  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:29.988613  300953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 17:50:30.486939  300953 kapi.go:107] duration metric: took 1m10.503870925s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0828 17:50:47.310629  300953 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0828 17:50:47.310650  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:47.810542  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:48.310062  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:48.810222  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:49.309551  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:49.810156  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:50.309502  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:50.810678  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:51.309887  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:51.810206  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:52.309469  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:52.810574  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:53.310532  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:53.810162  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:54.309745  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:54.810244  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:55.310139  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:55.810489  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:56.310483  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:56.809933  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:57.309884  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:57.809428  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:58.310420  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:58.810221  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:59.310105  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:50:59.809629  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:00.321427  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:00.810426  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:01.309731  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:01.809886  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:02.309751  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:02.809690  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:03.309812  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:03.814505  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:04.310191  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:04.810166  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:05.309256  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:05.810558  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:06.310142  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:06.810294  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:07.310173  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:07.809546  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:08.310823  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:08.809823  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:09.310224  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:09.809630  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:10.309972  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:10.809992  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:11.310422  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:11.810093  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:12.309681  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:12.810347  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:13.310158  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:13.809794  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:14.309383  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:14.809877  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:15.309621  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:15.809735  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:16.309901  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:16.810366  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:17.309176  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:17.809725  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:18.309723  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:18.810296  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:19.309633  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:19.809724  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:20.309576  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:20.825978  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:21.310227  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:21.810832  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:22.309109  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:22.809641  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:23.310850  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:23.809899  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:24.310614  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:24.809783  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:25.309395  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:25.810340  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:26.309689  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:26.809750  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:27.314393  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:27.811039  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:28.309790  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:28.809941  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:29.313930  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:29.809908  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:30.310284  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:30.810297  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:31.310418  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:31.810414  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:32.310083  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:32.809270  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:33.310290  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:33.810908  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:34.309174  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:34.810200  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:35.309595  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:35.810020  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:36.310141  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:36.809443  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:37.310333  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:37.809357  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:38.309885  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:38.810497  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:39.310409  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:39.810652  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:40.309503  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:40.810371  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:41.310442  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:41.809884  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:42.310349  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:42.810232  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:43.309701  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:43.809935  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:44.309605  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:44.809284  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:45.312498  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:45.809473  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:46.309996  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:46.809671  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:47.309617  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:47.810279  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:48.310018  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:48.810048  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:49.310221  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:49.809888  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:50.309823  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:50.809511  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:51.310548  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:51.810024  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:52.310284  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:52.810090  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:53.310924  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:53.809587  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:54.309933  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:54.809884  300953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 17:51:55.324357  300953 kapi.go:107] duration metric: took 2m31.018214709s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0828 17:51:55.326453  300953 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-606058 cluster.
	I0828 17:51:55.328829  300953 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0828 17:51:55.330768  300953 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0828 17:51:55.332801  300953 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, metrics-server, volcano, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0828 17:51:55.334620  300953 addons.go:510] duration metric: took 2m43.640718921s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns default-storageclass metrics-server volcano inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0828 17:51:55.334669  300953 start.go:246] waiting for cluster config update ...
	I0828 17:51:55.334692  300953 start.go:255] writing updated cluster config ...
	I0828 17:51:55.334991  300953 ssh_runner.go:195] Run: rm -f paused
	I0828 17:51:55.717013  300953 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 17:51:55.719159  300953 out.go:177] * Done! kubectl is now configured to use "addons-606058" cluster and "default" namespace by default
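
	The gcp-auth addon output above notes that pods can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal illustrative sketch (not part of the test run), a pod manifest with that label might look like the following; the pod name, namespace, image, and the label value "true" are hypothetical, only the label key comes from the addon message:

	# Hedged sketch: opts a pod out of GCP credential mounting per the hint above.
	# Name and image are placeholders; the label value "true" is an assumption,
	# since the addon output only names the label key.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds            # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	    - name: app
	      image: nginx              # placeholder image

	Per the same output, pods created before the addon finished would only pick up the credentials after being recreated or after rerunning addons enable with --refresh.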
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	1faf6e3c3fb51       e2d3313f65753       2 minutes ago       Exited              gadget                                   5                   86976d3ec8afe       gadget-snx4z
	09499aa13fd50       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   940ecab6484b6       gcp-auth-89d5ffd79-bp25j
	aab8612fb956a       8b46b1cd48760       4 minutes ago       Running             admission                                0                   7d1beb438add4       volcano-admission-77d7d48b68-phvl2
	de4353fc468cf       289a818c8d9c5       4 minutes ago       Running             controller                               0                   7e696e6be9e0d       ingress-nginx-controller-bc57996ff-xd7zj
	079f8cb4357f6       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   a3128b7fdb3d0       csi-hostpathplugin-vj7zr
	b39343c6ebacf       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   a3128b7fdb3d0       csi-hostpathplugin-vj7zr
	b45f54a21cd47       420193b27261a       5 minutes ago       Exited              patch                                    2                   43c3a70132e5e       ingress-nginx-admission-patch-pj42r
	6b292f8da9e6a       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   a3128b7fdb3d0       csi-hostpathplugin-vj7zr
	65af315667257       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   a3128b7fdb3d0       csi-hostpathplugin-vj7zr
	88267d3c2a039       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   a3128b7fdb3d0       csi-hostpathplugin-vj7zr
	05973f6cb6989       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   b85d3a0e0cc3f       csi-hostpath-attacher-0
	0d1bc7183f8b1       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   a3128b7fdb3d0       csi-hostpathplugin-vj7zr
	7c98202b18c3f       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   881523711890f       volcano-controllers-56675bb4d5-l985r
	79fe6ea48d9e0       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   1fbe6d42648a7       volcano-scheduler-576bc46687-g52qk
	48be797894322       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   c7da08c848b38       csi-hostpath-resizer-0
	604847db2069f       420193b27261a       5 minutes ago       Exited              create                                   0                   4ed3baf3f697d       ingress-nginx-admission-create-vvgj8
	843b9b69f1748       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   cd093295e3fde       snapshot-controller-56fcc65765-zlsdt
	53194bee74f7c       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   2ffab575312ad       local-path-provisioner-86d989889c-jk6wv
	ee8f5a02b7ca4       6fed88f43b276       5 minutes ago       Running             registry                                 0                   3bb7012a053c9       registry-6fb4cdfc84-qgmt4
	144de57189543       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   765e0e247717e       metrics-server-84c5f94fbc-6724d
	954f4e8f9949d       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   285a7d6dc6033       snapshot-controller-56fcc65765-b85q6
	aabeaea3e6a1b       77bdba588b953       5 minutes ago       Running             yakd                                     0                   3cc7202b10ded       yakd-dashboard-67d98fc6b-lxv5t
	aabbf97d38f6a       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   c6658ef61f036       registry-proxy-mjx8k
	9c28faff36414       2437cf7621777       5 minutes ago       Running             coredns                                  0                   e9631053e5961       coredns-6f6b679f8f-p9qlh
	c36db598c302d       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   175165e1ef960       cloud-spanner-emulator-769b77f747-hxrwq
	e24eda3e360c6       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   8c8e8f2a6c48a       nvidia-device-plugin-daemonset-gvhjt
	53954971ba354       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   8b2715d088f08       kube-ingress-dns-minikube
	d19b76dd6a724       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   83c98ac3c5c86       storage-provisioner
	3d1909867035f       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   6a6427bdb2879       kindnet-rs9hc
	3e92639301efb       71d55d66fd4ee       6 minutes ago       Running             kube-proxy                               0                   658ea788529f2       kube-proxy-9d9sc
	ee7f582b2645b       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   ed4b6c42b3f96       kube-apiserver-addons-606058
	6c58422a4e59c       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   3525b89c3a1c5       kube-controller-manager-addons-606058
	cd0f2d03ad6fc       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   f50212f65a1c7       kube-scheduler-addons-606058
	fb9915a887efb       27e3830e14027       6 minutes ago       Running             etcd                                     0                   c31be0be60953       etcd-addons-606058
	
	
	==> containerd <==
	Aug 28 17:52:06 addons-606058 containerd[819]: time="2024-08-28T17:52:06.431558717Z" level=info msg="RemovePodSandbox \"cf900e2f697ae3ebedac4717c5dbca86fe2b242a0fe1f0ff620d47f59572a985\" returns successfully"
	Aug 28 17:52:53 addons-606058 containerd[819]: time="2024-08-28T17:52:53.344405180Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\""
	Aug 28 17:52:53 addons-606058 containerd[819]: time="2024-08-28T17:52:53.478750167Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 28 17:52:53 addons-606058 containerd[819]: time="2024-08-28T17:52:53.480475206Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: active requests=0, bytes read=89"
	Aug 28 17:52:53 addons-606058 containerd[819]: time="2024-08-28T17:52:53.484766223Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" with image id \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\", size \"69907666\" in 140.311132ms"
	Aug 28 17:52:53 addons-606058 containerd[819]: time="2024-08-28T17:52:53.484819138Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" returns image reference \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\""
	Aug 28 17:52:53 addons-606058 containerd[819]: time="2024-08-28T17:52:53.486886912Z" level=info msg="CreateContainer within sandbox \"86976d3ec8afecbbc1e98fe9d2c398750e70d973b54f51f8eaac3f2afee02b77\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 28 17:52:53 addons-606058 containerd[819]: time="2024-08-28T17:52:53.504647443Z" level=info msg="CreateContainer within sandbox \"86976d3ec8afecbbc1e98fe9d2c398750e70d973b54f51f8eaac3f2afee02b77\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9\""
	Aug 28 17:52:53 addons-606058 containerd[819]: time="2024-08-28T17:52:53.505257008Z" level=info msg="StartContainer for \"1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9\""
	Aug 28 17:52:53 addons-606058 containerd[819]: time="2024-08-28T17:52:53.556492474Z" level=info msg="StartContainer for \"1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9\" returns successfully"
	Aug 28 17:52:54 addons-606058 containerd[819]: time="2024-08-28T17:52:54.811483259Z" level=info msg="shim disconnected" id=1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9 namespace=k8s.io
	Aug 28 17:52:54 addons-606058 containerd[819]: time="2024-08-28T17:52:54.811540564Z" level=warning msg="cleaning up after shim disconnected" id=1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9 namespace=k8s.io
	Aug 28 17:52:54 addons-606058 containerd[819]: time="2024-08-28T17:52:54.811549909Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 28 17:52:55 addons-606058 containerd[819]: time="2024-08-28T17:52:55.462188543Z" level=info msg="RemoveContainer for \"9817895969b971c1619d7099d63b350eed3ea4f1e4e9016dab1f1bdbbd6cd42f\""
	Aug 28 17:52:55 addons-606058 containerd[819]: time="2024-08-28T17:52:55.471837396Z" level=info msg="RemoveContainer for \"9817895969b971c1619d7099d63b350eed3ea4f1e4e9016dab1f1bdbbd6cd42f\" returns successfully"
	Aug 28 17:53:06 addons-606058 containerd[819]: time="2024-08-28T17:53:06.435981524Z" level=info msg="RemoveContainer for \"ffce7a57b8ec9c942389d692e1f0a0f31ba4459537bbc42e9abce7e7b875df35\""
	Aug 28 17:53:06 addons-606058 containerd[819]: time="2024-08-28T17:53:06.442288180Z" level=info msg="RemoveContainer for \"ffce7a57b8ec9c942389d692e1f0a0f31ba4459537bbc42e9abce7e7b875df35\" returns successfully"
	Aug 28 17:53:06 addons-606058 containerd[819]: time="2024-08-28T17:53:06.444258502Z" level=info msg="StopPodSandbox for \"2d98e533993af7306b96b6811022ca5cccc832bc0010fde24a061b23ceb380d0\""
	Aug 28 17:53:06 addons-606058 containerd[819]: time="2024-08-28T17:53:06.454795778Z" level=info msg="TearDown network for sandbox \"2d98e533993af7306b96b6811022ca5cccc832bc0010fde24a061b23ceb380d0\" successfully"
	Aug 28 17:53:06 addons-606058 containerd[819]: time="2024-08-28T17:53:06.454838100Z" level=info msg="StopPodSandbox for \"2d98e533993af7306b96b6811022ca5cccc832bc0010fde24a061b23ceb380d0\" returns successfully"
	Aug 28 17:53:06 addons-606058 containerd[819]: time="2024-08-28T17:53:06.455337814Z" level=info msg="RemovePodSandbox for \"2d98e533993af7306b96b6811022ca5cccc832bc0010fde24a061b23ceb380d0\""
	Aug 28 17:53:06 addons-606058 containerd[819]: time="2024-08-28T17:53:06.455416395Z" level=info msg="Forcibly stopping sandbox \"2d98e533993af7306b96b6811022ca5cccc832bc0010fde24a061b23ceb380d0\""
	Aug 28 17:53:06 addons-606058 containerd[819]: time="2024-08-28T17:53:06.462691793Z" level=info msg="TearDown network for sandbox \"2d98e533993af7306b96b6811022ca5cccc832bc0010fde24a061b23ceb380d0\" successfully"
	Aug 28 17:53:06 addons-606058 containerd[819]: time="2024-08-28T17:53:06.469411912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d98e533993af7306b96b6811022ca5cccc832bc0010fde24a061b23ceb380d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 28 17:53:06 addons-606058 containerd[819]: time="2024-08-28T17:53:06.469526423Z" level=info msg="RemovePodSandbox \"2d98e533993af7306b96b6811022ca5cccc832bc0010fde24a061b23ceb380d0\" returns successfully"
	
	
	==> coredns [9c28faff364142bd5ad9d86145c1839a73ede2b695accffeea35338d5afd70bc] <==
	[INFO] 10.244.0.4:56606 - 45334 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048911s
	[INFO] 10.244.0.4:37732 - 21144 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002274402s
	[INFO] 10.244.0.4:37732 - 26534 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001155581s
	[INFO] 10.244.0.4:54589 - 20399 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000055843s
	[INFO] 10.244.0.4:54589 - 59561 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000043667s
	[INFO] 10.244.0.4:36767 - 36575 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095425s
	[INFO] 10.244.0.4:36767 - 56800 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000047368s
	[INFO] 10.244.0.4:54540 - 9222 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055819s
	[INFO] 10.244.0.4:54540 - 8708 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000036127s
	[INFO] 10.244.0.4:42192 - 32454 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053915s
	[INFO] 10.244.0.4:42192 - 41411 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035569s
	[INFO] 10.244.0.4:58126 - 12217 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001906085s
	[INFO] 10.244.0.4:58126 - 955 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001291637s
	[INFO] 10.244.0.4:41418 - 49235 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046039s
	[INFO] 10.244.0.4:41418 - 34893 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000047458s
	[INFO] 10.244.0.24:50728 - 4415 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.003588932s
	[INFO] 10.244.0.24:43195 - 55516 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004484518s
	[INFO] 10.244.0.24:46019 - 46473 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000198965s
	[INFO] 10.244.0.24:35238 - 58871 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013124s
	[INFO] 10.244.0.24:35587 - 14432 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000209132s
	[INFO] 10.244.0.24:42713 - 5647 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000231039s
	[INFO] 10.244.0.24:38303 - 53723 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002888242s
	[INFO] 10.244.0.24:42621 - 37551 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002140151s
	[INFO] 10.244.0.24:44617 - 7179 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00479367s
	[INFO] 10.244.0.24:43688 - 13225 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002194173s
	
	
	==> describe nodes <==
	Name:               addons-606058
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-606058
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=addons-606058
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T17_49_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-606058
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-606058"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:49:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-606058
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:55:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:52:10 +0000   Wed, 28 Aug 2024 17:48:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:52:10 +0000   Wed, 28 Aug 2024 17:48:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:52:10 +0000   Wed, 28 Aug 2024 17:48:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:52:10 +0000   Wed, 28 Aug 2024 17:49:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-606058
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 dcb09f1e6df4498e9d536e14cadc64f1
	  System UUID:                8a6ebe57-45d3-4c09-96d3-b7fdafbac000
	  Boot ID:                    d0152fd0-4c93-4332-a156-fea49619c341
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-hxrwq     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  gadget                      gadget-snx4z                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-bp25j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-xd7zj    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-6f6b679f8f-p9qlh                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-vj7zr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-606058                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-rs9hc                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-606058                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-606058       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-9d9sc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-606058                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 metrics-server-84c5f94fbc-6724d             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-gvhjt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-6fb4cdfc84-qgmt4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-mjx8k                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-b85q6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-zlsdt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  local-path-storage          local-path-provisioner-86d989889c-jk6wv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-77d7d48b68-phvl2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-l985r        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-576bc46687-g52qk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-lxv5t              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 6m1s  kube-proxy       
	  Normal   Starting                 6m8s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s  kubelet          Node addons-606058 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s  kubelet          Node addons-606058 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s  kubelet          Node addons-606058 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m4s  node-controller  Node addons-606058 event: Registered Node addons-606058 in Controller
	
	
	==> dmesg <==
	[Aug28 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014863] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.434530] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.056418] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002651] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015530] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004137] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003926] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.610422] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.526921] kauditd_printk_skb: 36 callbacks suppressed
	[Aug28 16:44] hrtimer: interrupt took 13474972 ns
	[Aug28 17:17] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [fb9915a887efb7b8da01657b6e58daac8c4341a5843e284b51f2d588bc87a48b] <==
	{"level":"info","ts":"2024-08-28T17:48:58.709340Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-28T17:48:58.709636Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-28T17:48:58.709661Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-28T17:48:58.709744Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-28T17:48:58.709754Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-28T17:48:59.179432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-28T17:48:59.179670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-28T17:48:59.179761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-28T17:48:59.179919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-28T17:48:59.179998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-28T17:48:59.180098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-28T17:48:59.180176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-28T17:48:59.183585Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-606058 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T17:48:59.183950Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:48:59.184527Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:48:59.184808Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:48:59.186024Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:48:59.187462Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T17:48:59.187494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T17:48:59.188085Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:48:59.188933Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-28T17:48:59.189521Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:48:59.189731Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:48:59.195764Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:48:59.212212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [09499aa13fd50e29ea40f643d7d95a147e0ffed3497e23d49ee3389315e09358] <==
	2024/08/28 17:51:54 GCP Auth Webhook started!
	2024/08/28 17:52:12 Ready to marshal response ...
	2024/08/28 17:52:12 Ready to write response ...
	2024/08/28 17:52:12 Ready to marshal response ...
	2024/08/28 17:52:12 Ready to write response ...
	
	
	==> kernel <==
	 17:55:14 up  1:37,  0 users,  load average: 0.15, 1.12, 2.13
	Linux addons-606058 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3d1909867035f0afbd244afb6e24b11984e37d490bf2f73f58f51c8897b87eb4] <==
	I0828 17:53:05.229647       1 main.go:299] handling current node
	I0828 17:53:15.222719       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:53:15.222762       1 main.go:299] handling current node
	I0828 17:53:25.227532       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:53:25.227571       1 main.go:299] handling current node
	I0828 17:53:35.229881       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:53:35.229915       1 main.go:299] handling current node
	I0828 17:53:45.230278       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:53:45.230341       1 main.go:299] handling current node
	I0828 17:53:55.227924       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:53:55.228011       1 main.go:299] handling current node
	I0828 17:54:05.222268       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:54:05.222367       1 main.go:299] handling current node
	I0828 17:54:15.223088       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:54:15.223132       1 main.go:299] handling current node
	I0828 17:54:25.230198       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:54:25.230234       1 main.go:299] handling current node
	I0828 17:54:35.231094       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:54:35.231132       1 main.go:299] handling current node
	I0828 17:54:45.227663       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:54:45.227708       1 main.go:299] handling current node
	I0828 17:54:55.226652       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:54:55.226694       1 main.go:299] handling current node
	I0828 17:55:05.231450       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0828 17:55:05.231489       1 main.go:299] handling current node
	
	
	==> kube-apiserver [ee7f582b2645be88c0e83a46e504759f0894da8933ec72ee5aec6aca6ae6bb89] <==
	W0828 17:50:26.164210       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:27.182673       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:27.264882       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.205.60:443: connect: connection refused
	E0828 17:50:27.264919       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.205.60:443: connect: connection refused" logger="UnhandledError"
	W0828 17:50:27.266672       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:27.357446       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.205.60:443: connect: connection refused
	E0828 17:50:27.357486       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.205.60:443: connect: connection refused" logger="UnhandledError"
	W0828 17:50:27.359069       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:28.201061       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:29.280390       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:30.288765       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:31.344443       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:32.370258       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:33.440480       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:34.454980       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:35.523933       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:36.553309       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.92.97:443: connect: connection refused
	W0828 17:50:47.182948       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.205.60:443: connect: connection refused
	E0828 17:50:47.182990       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.205.60:443: connect: connection refused" logger="UnhandledError"
	W0828 17:51:27.276006       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.205.60:443: connect: connection refused
	E0828 17:51:27.276054       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.205.60:443: connect: connection refused" logger="UnhandledError"
	W0828 17:51:27.365726       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.205.60:443: connect: connection refused
	E0828 17:51:27.365768       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.205.60:443: connect: connection refused" logger="UnhandledError"
	I0828 17:52:12.244578       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0828 17:52:12.290097       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [6c58422a4e59c01ce6bc793d93c1c04487c86c020c8649fb7bf4bb8f448ae3fc] <==
	I0828 17:51:27.292920       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0828 17:51:27.301663       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0828 17:51:27.314315       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0828 17:51:27.375527       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0828 17:51:27.390322       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0828 17:51:27.390438       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0828 17:51:27.400763       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0828 17:51:28.206228       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0828 17:51:28.218409       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0828 17:51:29.314465       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0828 17:51:29.337906       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0828 17:51:30.322657       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0828 17:51:30.332856       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0828 17:51:30.339827       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0828 17:51:30.350397       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0828 17:51:30.361313       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0828 17:51:30.367464       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0828 17:51:55.318045       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="11.118337ms"
	I0828 17:51:55.318232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="121.427µs"
	I0828 17:52:00.095719       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0828 17:52:00.096213       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0828 17:52:00.312618       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0828 17:52:00.362236       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0828 17:52:10.793727       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-606058"
	I0828 17:52:11.955638       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [3e92639301efb48dca67937c8521361f53deb77ed65f1a19c780c61c12cd2f15] <==
	I0828 17:49:12.702460       1 server_linux.go:66] "Using iptables proxy"
	I0828 17:49:12.812751       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0828 17:49:12.812814       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 17:49:12.870383       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0828 17:49:12.870454       1 server_linux.go:169] "Using iptables Proxier"
	I0828 17:49:12.872850       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 17:49:12.873319       1 server.go:483] "Version info" version="v1.31.0"
	I0828 17:49:12.873334       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:49:12.885942       1 config.go:197] "Starting service config controller"
	I0828 17:49:12.885969       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 17:49:12.885997       1 config.go:104] "Starting endpoint slice config controller"
	I0828 17:49:12.886001       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 17:49:12.886393       1 config.go:326] "Starting node config controller"
	I0828 17:49:12.886401       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 17:49:12.990436       1 shared_informer.go:320] Caches are synced for node config
	I0828 17:49:12.990476       1 shared_informer.go:320] Caches are synced for service config
	I0828 17:49:12.990516       1 shared_informer.go:320] Caches are synced for endpoint slice config
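
	The kube-proxy warning above ("nodePortAddresses is unset ... Consider using `--nodeport-addresses primary`") maps to the nodePortAddresses field of the KubeProxyConfiguration API. A hedged sketch of that configuration fragment is shown below; treating "primary" as a valid value in the config file (and not only for the CLI flag the log mentions) is an assumption for this kube-proxy version:

	# Hedged sketch of a KubeProxyConfiguration fragment corresponding to the
	# --nodeport-addresses hint in the log above. Accepting the "primary"
	# keyword here is an assumption; CIDR strings are the long-standing form.
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	nodePortAddresses:
	  - "primary"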
	
	
	==> kube-scheduler [cd0f2d03ad6fc5d884aaca2f5eb1860c9954f8d215ee2d3c18443a488a71e30e] <==
	W0828 17:49:03.749426       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0828 17:49:03.749443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:49:03.749506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0828 17:49:03.749528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:49:03.749569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 17:49:03.749586       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 17:49:03.749634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0828 17:49:03.749692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:49:03.749772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0828 17:49:03.749824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:49:03.749958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 17:49:03.750003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:49:03.750050       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 17:49:03.750096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:49:04.663094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0828 17:49:04.663316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:49:04.732449       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 17:49:04.732497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:49:04.786290       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0828 17:49:04.786339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:49:04.853184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 17:49:04.853388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:49:05.057010       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 17:49:05.057073       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0828 17:49:06.727796       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 17:53:10 addons-606058 kubelet[1498]: I0828 17:53:10.343631    1498 scope.go:117] "RemoveContainer" containerID="1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9"
	Aug 28 17:53:10 addons-606058 kubelet[1498]: E0828 17:53:10.343819    1498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-snx4z_gadget(dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856)\"" pod="gadget/gadget-snx4z" podUID="dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856"
	Aug 28 17:53:10 addons-606058 kubelet[1498]: I0828 17:53:10.344346    1498 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mjx8k" secret="" err="secret \"gcp-auth\" not found"
	Aug 28 17:53:25 addons-606058 kubelet[1498]: I0828 17:53:25.342786    1498 scope.go:117] "RemoveContainer" containerID="1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9"
	Aug 28 17:53:25 addons-606058 kubelet[1498]: E0828 17:53:25.342996    1498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-snx4z_gadget(dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856)\"" pod="gadget/gadget-snx4z" podUID="dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856"
	Aug 28 17:53:38 addons-606058 kubelet[1498]: I0828 17:53:38.343456    1498 scope.go:117] "RemoveContainer" containerID="1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9"
	Aug 28 17:53:38 addons-606058 kubelet[1498]: E0828 17:53:38.343651    1498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-snx4z_gadget(dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856)\"" pod="gadget/gadget-snx4z" podUID="dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856"
	Aug 28 17:53:53 addons-606058 kubelet[1498]: I0828 17:53:53.343261    1498 scope.go:117] "RemoveContainer" containerID="1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9"
	Aug 28 17:53:53 addons-606058 kubelet[1498]: E0828 17:53:53.343517    1498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-snx4z_gadget(dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856)\"" pod="gadget/gadget-snx4z" podUID="dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856"
	Aug 28 17:54:05 addons-606058 kubelet[1498]: I0828 17:54:05.343114    1498 scope.go:117] "RemoveContainer" containerID="1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9"
	Aug 28 17:54:05 addons-606058 kubelet[1498]: E0828 17:54:05.343338    1498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-snx4z_gadget(dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856)\"" pod="gadget/gadget-snx4z" podUID="dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856"
	Aug 28 17:54:08 addons-606058 kubelet[1498]: I0828 17:54:08.342720    1498 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-qgmt4" secret="" err="secret \"gcp-auth\" not found"
	Aug 28 17:54:19 addons-606058 kubelet[1498]: I0828 17:54:19.343318    1498 scope.go:117] "RemoveContainer" containerID="1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9"
	Aug 28 17:54:19 addons-606058 kubelet[1498]: I0828 17:54:19.343416    1498 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gvhjt" secret="" err="secret \"gcp-auth\" not found"
	Aug 28 17:54:19 addons-606058 kubelet[1498]: E0828 17:54:19.344098    1498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-snx4z_gadget(dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856)\"" pod="gadget/gadget-snx4z" podUID="dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856"
	Aug 28 17:54:22 addons-606058 kubelet[1498]: I0828 17:54:22.342779    1498 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mjx8k" secret="" err="secret \"gcp-auth\" not found"
	Aug 28 17:54:32 addons-606058 kubelet[1498]: I0828 17:54:32.342847    1498 scope.go:117] "RemoveContainer" containerID="1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9"
	Aug 28 17:54:32 addons-606058 kubelet[1498]: E0828 17:54:32.343046    1498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-snx4z_gadget(dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856)\"" pod="gadget/gadget-snx4z" podUID="dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856"
	Aug 28 17:54:45 addons-606058 kubelet[1498]: I0828 17:54:45.343565    1498 scope.go:117] "RemoveContainer" containerID="1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9"
	Aug 28 17:54:45 addons-606058 kubelet[1498]: E0828 17:54:45.343795    1498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-snx4z_gadget(dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856)\"" pod="gadget/gadget-snx4z" podUID="dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856"
	Aug 28 17:54:59 addons-606058 kubelet[1498]: I0828 17:54:59.342651    1498 scope.go:117] "RemoveContainer" containerID="1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9"
	Aug 28 17:54:59 addons-606058 kubelet[1498]: E0828 17:54:59.342881    1498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-snx4z_gadget(dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856)\"" pod="gadget/gadget-snx4z" podUID="dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856"
	Aug 28 17:55:10 addons-606058 kubelet[1498]: I0828 17:55:10.343009    1498 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-qgmt4" secret="" err="secret \"gcp-auth\" not found"
	Aug 28 17:55:14 addons-606058 kubelet[1498]: I0828 17:55:14.343710    1498 scope.go:117] "RemoveContainer" containerID="1faf6e3c3fb5147e4a3929c5de86f49db318bc084563b9459fb665304d7f38f9"
	Aug 28 17:55:14 addons-606058 kubelet[1498]: E0828 17:55:14.344432    1498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-snx4z_gadget(dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856)\"" pod="gadget/gadget-snx4z" podUID="dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856"
	
	
	==> storage-provisioner [d19b76dd6a724b3b3469d59691f93b8b9deccaea97448d535408d2620114907b] <==
	I0828 17:49:17.892362       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 17:49:17.917930       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 17:49:17.917972       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 17:49:17.927093       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 17:49:17.927563       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-606058_58e0ec2f-dd94-42df-9570-ad39defe3c4c!
	I0828 17:49:17.927636       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43b044ef-8291-4ad4-8fb1-401568c39318", APIVersion:"v1", ResourceVersion:"584", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-606058_58e0ec2f-dd94-42df-9570-ad39defe3c4c became leader
	I0828 17:49:18.032465       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-606058_58e0ec2f-dd94-42df-9570-ad39defe3c4c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-606058 -n addons-606058
helpers_test.go:261: (dbg) Run:  kubectl --context addons-606058 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-vvgj8 ingress-nginx-admission-patch-pj42r test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-606058 describe pod ingress-nginx-admission-create-vvgj8 ingress-nginx-admission-patch-pj42r test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-606058 describe pod ingress-nginx-admission-create-vvgj8 ingress-nginx-admission-patch-pj42r test-job-nginx-0: exit status 1 (90.476354ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-vvgj8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pj42r" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-606058 describe pod ingress-nginx-admission-create-vvgj8 ingress-nginx-admission-patch-pj42r test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (383.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-807226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-807226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m18.490661917s)

                                                
                                                
-- stdout --
	* [old-k8s-version-807226] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-807226" primary control-plane node in "old-k8s-version-807226" cluster
	* Pulling base image v0.0.44-1724775115-19521 ...
	* Restarting existing docker container for "old-k8s-version-807226" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-807226 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 18:37:51.641839  506953 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:37:51.642017  506953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:37:51.642036  506953 out.go:358] Setting ErrFile to fd 2...
	I0828 18:37:51.642042  506953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:37:51.642337  506953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	I0828 18:37:51.642779  506953 out.go:352] Setting JSON to false
	I0828 18:37:51.643988  506953 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8421,"bootTime":1724861851,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0828 18:37:51.644067  506953 start.go:139] virtualization:  
	I0828 18:37:51.648726  506953 out.go:177] * [old-k8s-version-807226] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0828 18:37:51.651236  506953 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:37:51.651333  506953 notify.go:220] Checking for updates...
	I0828 18:37:51.655922  506953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:37:51.659661  506953 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	I0828 18:37:51.661675  506953 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	I0828 18:37:51.663655  506953 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0828 18:37:51.665839  506953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:37:51.668176  506953 config.go:182] Loaded profile config "old-k8s-version-807226": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0828 18:37:51.670715  506953 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 18:37:51.673039  506953 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:37:51.742396  506953 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 18:37:51.742521  506953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 18:37:51.845602  506953 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-28 18:37:51.832811044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 18:37:51.845721  506953 docker.go:307] overlay module found
	I0828 18:37:51.848706  506953 out.go:177] * Using the docker driver based on existing profile
	I0828 18:37:51.850449  506953 start.go:297] selected driver: docker
	I0828 18:37:51.850471  506953 start.go:901] validating driver "docker" against &{Name:old-k8s-version-807226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-807226 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:37:51.850600  506953 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:37:51.851193  506953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 18:37:51.959643  506953 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-28 18:37:51.936633307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 18:37:51.960003  506953 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:37:51.960040  506953 cni.go:84] Creating CNI manager for ""
	I0828 18:37:51.960057  506953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0828 18:37:51.960100  506953 start.go:340] cluster config:
	{Name:old-k8s-version-807226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-807226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:37:51.963415  506953 out.go:177] * Starting "old-k8s-version-807226" primary control-plane node in "old-k8s-version-807226" cluster
	I0828 18:37:51.965163  506953 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0828 18:37:51.966889  506953 out.go:177] * Pulling base image v0.0.44-1724775115-19521 ...
	I0828 18:37:51.968853  506953 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0828 18:37:51.968918  506953 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0828 18:37:51.968931  506953 cache.go:56] Caching tarball of preloaded images
	I0828 18:37:51.969009  506953 preload.go:172] Found /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 18:37:51.969026  506953 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0828 18:37:51.969140  506953 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/config.json ...
	I0828 18:37:51.969356  506953 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	W0828 18:37:51.999143  506953 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce is of wrong architecture
	I0828 18:37:51.999168  506953 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 18:37:51.999240  506953 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0828 18:37:51.999263  506953 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0828 18:37:51.999272  506953 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0828 18:37:51.999280  506953 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0828 18:37:51.999285  506953 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from local cache
	I0828 18:37:52.138779  506953 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from cached tarball
	I0828 18:37:52.138817  506953 cache.go:194] Successfully downloaded all kic artifacts
	I0828 18:37:52.138858  506953 start.go:360] acquireMachinesLock for old-k8s-version-807226: {Name:mk132f90a4ccdd27793d3127364f631218793534 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:37:52.138922  506953 start.go:364] duration metric: took 43.118µs to acquireMachinesLock for "old-k8s-version-807226"
	I0828 18:37:52.138949  506953 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:37:52.138960  506953 fix.go:54] fixHost starting: 
	I0828 18:37:52.139238  506953 cli_runner.go:164] Run: docker container inspect old-k8s-version-807226 --format={{.State.Status}}
	I0828 18:37:52.173437  506953 fix.go:112] recreateIfNeeded on old-k8s-version-807226: state=Stopped err=<nil>
	W0828 18:37:52.173479  506953 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:37:52.176248  506953 out.go:177] * Restarting existing docker container for "old-k8s-version-807226" ...
	I0828 18:37:52.178843  506953 cli_runner.go:164] Run: docker start old-k8s-version-807226
	I0828 18:37:52.631497  506953 cli_runner.go:164] Run: docker container inspect old-k8s-version-807226 --format={{.State.Status}}
	I0828 18:37:52.658519  506953 kic.go:430] container "old-k8s-version-807226" state is running.
	I0828 18:37:52.659840  506953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-807226
	I0828 18:37:52.691789  506953 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/config.json ...
	I0828 18:37:52.692034  506953 machine.go:93] provisionDockerMachine start ...
	I0828 18:37:52.692101  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:52.742468  506953 main.go:141] libmachine: Using SSH client type: native
	I0828 18:37:52.742763  506953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0828 18:37:52.742772  506953 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:37:52.743747  506953 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45708->127.0.0.1:33433: read: connection reset by peer
	I0828 18:37:55.880003  506953 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-807226
	
	I0828 18:37:55.880113  506953 ubuntu.go:169] provisioning hostname "old-k8s-version-807226"
	I0828 18:37:55.880217  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:55.906891  506953 main.go:141] libmachine: Using SSH client type: native
	I0828 18:37:55.907163  506953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0828 18:37:55.907181  506953 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-807226 && echo "old-k8s-version-807226" | sudo tee /etc/hostname
	I0828 18:37:56.072049  506953 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-807226
	
	I0828 18:37:56.072137  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:56.091083  506953 main.go:141] libmachine: Using SSH client type: native
	I0828 18:37:56.091349  506953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0828 18:37:56.091406  506953 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-807226' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-807226/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-807226' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:37:56.231837  506953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:37:56.231912  506953 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19529-294791/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-294791/.minikube}
	I0828 18:37:56.231952  506953 ubuntu.go:177] setting up certificates
	I0828 18:37:56.231994  506953 provision.go:84] configureAuth start
	I0828 18:37:56.232099  506953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-807226
	I0828 18:37:56.251488  506953 provision.go:143] copyHostCerts
	I0828 18:37:56.251557  506953 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-294791/.minikube/ca.pem, removing ...
	I0828 18:37:56.251565  506953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-294791/.minikube/ca.pem
	I0828 18:37:56.251634  506953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-294791/.minikube/ca.pem (1082 bytes)
	I0828 18:37:56.251728  506953 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-294791/.minikube/cert.pem, removing ...
	I0828 18:37:56.251733  506953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-294791/.minikube/cert.pem
	I0828 18:37:56.251757  506953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-294791/.minikube/cert.pem (1123 bytes)
	I0828 18:37:56.251818  506953 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-294791/.minikube/key.pem, removing ...
	I0828 18:37:56.251823  506953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-294791/.minikube/key.pem
	I0828 18:37:56.251845  506953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-294791/.minikube/key.pem (1679 bytes)
	I0828 18:37:56.251894  506953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-294791/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-807226 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-807226]
	I0828 18:37:56.450404  506953 provision.go:177] copyRemoteCerts
	I0828 18:37:56.450652  506953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:37:56.450830  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:56.477962  506953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/old-k8s-version-807226/id_rsa Username:docker}
	I0828 18:37:56.578366  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 18:37:56.604340  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0828 18:37:56.629996  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0828 18:37:56.655486  506953 provision.go:87] duration metric: took 423.461829ms to configureAuth
	I0828 18:37:56.655512  506953 ubuntu.go:193] setting minikube options for container-runtime
	I0828 18:37:56.655713  506953 config.go:182] Loaded profile config "old-k8s-version-807226": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0828 18:37:56.655720  506953 machine.go:96] duration metric: took 3.963678379s to provisionDockerMachine
	I0828 18:37:56.655728  506953 start.go:293] postStartSetup for "old-k8s-version-807226" (driver="docker")
	I0828 18:37:56.655739  506953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:37:56.655788  506953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:37:56.655836  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:56.672984  506953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/old-k8s-version-807226/id_rsa Username:docker}
	I0828 18:37:56.776452  506953 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:37:56.780282  506953 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0828 18:37:56.780320  506953 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0828 18:37:56.780330  506953 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0828 18:37:56.780338  506953 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0828 18:37:56.780347  506953 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-294791/.minikube/addons for local assets ...
	I0828 18:37:56.780405  506953 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-294791/.minikube/files for local assets ...
	I0828 18:37:56.780490  506953 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-294791/.minikube/files/etc/ssl/certs/3001822.pem -> 3001822.pem in /etc/ssl/certs
	I0828 18:37:56.780597  506953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:37:56.788791  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/files/etc/ssl/certs/3001822.pem --> /etc/ssl/certs/3001822.pem (1708 bytes)
	I0828 18:37:56.812703  506953 start.go:296] duration metric: took 156.959775ms for postStartSetup
	I0828 18:37:56.812816  506953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 18:37:56.812879  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:56.843627  506953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/old-k8s-version-807226/id_rsa Username:docker}
	I0828 18:37:56.936985  506953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0828 18:37:56.942149  506953 fix.go:56] duration metric: took 4.803180339s for fixHost
	I0828 18:37:56.942173  506953 start.go:83] releasing machines lock for "old-k8s-version-807226", held for 4.803235469s
	I0828 18:37:56.942249  506953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-807226
	I0828 18:37:56.964744  506953 ssh_runner.go:195] Run: cat /version.json
	I0828 18:37:56.964801  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:56.964830  506953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:37:56.964887  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:56.990973  506953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/old-k8s-version-807226/id_rsa Username:docker}
	I0828 18:37:57.001735  506953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/old-k8s-version-807226/id_rsa Username:docker}
	I0828 18:37:57.100127  506953 ssh_runner.go:195] Run: systemctl --version
	I0828 18:37:57.244894  506953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0828 18:37:57.249433  506953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0828 18:37:57.270678  506953 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0828 18:37:57.270767  506953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:37:57.285996  506953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0828 18:37:57.286022  506953 start.go:495] detecting cgroup driver to use...
	I0828 18:37:57.286056  506953 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0828 18:37:57.286114  506953 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0828 18:37:57.305475  506953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 18:37:57.325077  506953 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:37:57.325142  506953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:37:57.342487  506953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:37:57.354947  506953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:37:57.460969  506953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:37:57.580045  506953 docker.go:233] disabling docker service ...
	I0828 18:37:57.580137  506953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:37:57.596719  506953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:37:57.609186  506953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:37:57.767748  506953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:37:57.881780  506953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:37:57.899365  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:37:57.917671  506953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0828 18:37:57.928486  506953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0828 18:37:57.940202  506953 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0828 18:37:57.940345  506953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0828 18:37:57.953313  506953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 18:37:57.962794  506953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0828 18:37:57.973889  506953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 18:37:57.984947  506953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:37:57.995352  506953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0828 18:37:58.008928  506953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:37:58.020642  506953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:37:58.040983  506953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:37:58.153504  506953 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0828 18:37:58.365153  506953 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0828 18:37:58.365252  506953 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0828 18:37:58.369513  506953 start.go:563] Will wait 60s for crictl version
	I0828 18:37:58.369591  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:37:58.373363  506953 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:37:58.430755  506953 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0828 18:37:58.430849  506953 ssh_runner.go:195] Run: containerd --version
	I0828 18:37:58.472434  506953 ssh_runner.go:195] Run: containerd --version
	I0828 18:37:58.505948  506953 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	I0828 18:37:58.507728  506953 cli_runner.go:164] Run: docker network inspect old-k8s-version-807226 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0828 18:37:58.524885  506953 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0828 18:37:58.528908  506953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:37:58.541733  506953 kubeadm.go:883] updating cluster {Name:old-k8s-version-807226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-807226 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:37:58.541853  506953 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0828 18:37:58.541918  506953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:37:58.607046  506953 containerd.go:627] all images are preloaded for containerd runtime.
	I0828 18:37:58.607078  506953 containerd.go:534] Images already preloaded, skipping extraction
	I0828 18:37:58.607138  506953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:37:58.662098  506953 containerd.go:627] all images are preloaded for containerd runtime.
	I0828 18:37:58.662127  506953 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:37:58.662136  506953 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0828 18:37:58.662258  506953 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-807226 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-807226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:37:58.662326  506953 ssh_runner.go:195] Run: sudo crictl info
	I0828 18:37:58.724285  506953 cni.go:84] Creating CNI manager for ""
	I0828 18:37:58.724307  506953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0828 18:37:58.724317  506953 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:37:58.724336  506953 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-807226 NodeName:old-k8s-version-807226 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0828 18:37:58.724459  506953 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-807226"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:37:58.724524  506953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0828 18:37:58.734493  506953 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:37:58.734669  506953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:37:58.743804  506953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0828 18:37:58.762962  506953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:37:58.783992  506953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0828 18:37:58.809263  506953 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0828 18:37:58.814717  506953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:37:58.828835  506953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:37:58.997616  506953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:37:59.016740  506953 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226 for IP: 192.168.85.2
	I0828 18:37:59.016765  506953 certs.go:194] generating shared ca certs ...
	I0828 18:37:59.016782  506953 certs.go:226] acquiring lock for ca certs: {Name:mke663c906ba93beaf12a5613882d3e46b93d46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:37:59.016952  506953 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-294791/.minikube/ca.key
	I0828 18:37:59.017021  506953 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.key
	I0828 18:37:59.017035  506953 certs.go:256] generating profile certs ...
	I0828 18:37:59.017149  506953 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.key
	I0828 18:37:59.017225  506953 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/apiserver.key.cfee8091
	I0828 18:37:59.017285  506953 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/proxy-client.key
	I0828 18:37:59.017408  506953 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/300182.pem (1338 bytes)
	W0828 18:37:59.017451  506953 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-294791/.minikube/certs/300182_empty.pem, impossibly tiny 0 bytes
	I0828 18:37:59.017464  506953 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:37:59.017501  506953 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem (1082 bytes)
	I0828 18:37:59.017530  506953 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:37:59.017567  506953 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/key.pem (1679 bytes)
	I0828 18:37:59.017630  506953 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/files/etc/ssl/certs/3001822.pem (1708 bytes)
	I0828 18:37:59.018328  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:37:59.061633  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0828 18:37:59.160876  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:37:59.199538  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:37:59.241479  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 18:37:59.265089  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 18:37:59.292256  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:37:59.317005  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 18:37:59.343733  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/certs/300182.pem --> /usr/share/ca-certificates/300182.pem (1338 bytes)
	I0828 18:37:59.372162  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/files/etc/ssl/certs/3001822.pem --> /usr/share/ca-certificates/3001822.pem (1708 bytes)
	I0828 18:37:59.400104  506953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:37:59.425803  506953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:37:59.444712  506953 ssh_runner.go:195] Run: openssl version
	I0828 18:37:59.450369  506953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300182.pem && ln -fs /usr/share/ca-certificates/300182.pem /etc/ssl/certs/300182.pem"
	I0828 18:37:59.459676  506953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300182.pem
	I0828 18:37:59.463144  506953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:59 /usr/share/ca-certificates/300182.pem
	I0828 18:37:59.463232  506953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300182.pem
	I0828 18:37:59.470250  506953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300182.pem /etc/ssl/certs/51391683.0"
	I0828 18:37:59.479205  506953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3001822.pem && ln -fs /usr/share/ca-certificates/3001822.pem /etc/ssl/certs/3001822.pem"
	I0828 18:37:59.489471  506953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3001822.pem
	I0828 18:37:59.493035  506953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:59 /usr/share/ca-certificates/3001822.pem
	I0828 18:37:59.493135  506953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3001822.pem
	I0828 18:37:59.499992  506953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3001822.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:37:59.508881  506953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:37:59.518187  506953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:37:59.521762  506953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:37:59.521829  506953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:37:59.528752  506953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:37:59.540364  506953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:37:59.544403  506953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:37:59.551189  506953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:37:59.558092  506953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:37:59.564816  506953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:37:59.571581  506953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:37:59.578558  506953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 18:37:59.585743  506953 kubeadm.go:392] StartCluster: {Name:old-k8s-version-807226 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-807226 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:37:59.585848  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0828 18:37:59.586091  506953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:37:59.651054  506953 cri.go:89] found id: "64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1"
	I0828 18:37:59.651079  506953 cri.go:89] found id: "e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b"
	I0828 18:37:59.651085  506953 cri.go:89] found id: "57b5e6b021fd3fa6be0b86391c8620f1e28f3f9176bf158e0e6833d5daaf54ea"
	I0828 18:37:59.651088  506953 cri.go:89] found id: "1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6"
	I0828 18:37:59.651091  506953 cri.go:89] found id: "b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1"
	I0828 18:37:59.651097  506953 cri.go:89] found id: "a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3"
	I0828 18:37:59.651100  506953 cri.go:89] found id: "e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6"
	I0828 18:37:59.651104  506953 cri.go:89] found id: "24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec"
	I0828 18:37:59.651107  506953 cri.go:89] found id: ""
	I0828 18:37:59.651157  506953 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0828 18:37:59.663965  506953 cri.go:116] JSON = null
	W0828 18:37:59.664014  506953 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0828 18:37:59.664087  506953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:37:59.673240  506953 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:37:59.673258  506953 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:37:59.673307  506953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:37:59.681623  506953 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:37:59.682037  506953 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-807226" does not appear in /home/jenkins/minikube-integration/19529-294791/kubeconfig
	I0828 18:37:59.682135  506953 kubeconfig.go:62] /home/jenkins/minikube-integration/19529-294791/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-807226" cluster setting kubeconfig missing "old-k8s-version-807226" context setting]
	I0828 18:37:59.683152  506953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/kubeconfig: {Name:mkdafb119dde5c297a9c0a5213c3687bb184c63e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:37:59.685312  506953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:37:59.709362  506953 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0828 18:37:59.709450  506953 kubeadm.go:597] duration metric: took 36.185668ms to restartPrimaryControlPlane
	I0828 18:37:59.709466  506953 kubeadm.go:394] duration metric: took 123.733076ms to StartCluster
	I0828 18:37:59.709484  506953 settings.go:142] acquiring lock: {Name:mka844fbf5a951ef11587fd548e96fc1d30af8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:37:59.709557  506953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-294791/kubeconfig
	I0828 18:37:59.710246  506953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/kubeconfig: {Name:mkdafb119dde5c297a9c0a5213c3687bb184c63e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:37:59.710478  506953 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0828 18:37:59.710782  506953 config.go:182] Loaded profile config "old-k8s-version-807226": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0828 18:37:59.710831  506953 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:37:59.710906  506953 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-807226"
	I0828 18:37:59.710953  506953 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-807226"
	W0828 18:37:59.710964  506953 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:37:59.710989  506953 host.go:66] Checking if "old-k8s-version-807226" exists ...
	I0828 18:37:59.711505  506953 cli_runner.go:164] Run: docker container inspect old-k8s-version-807226 --format={{.State.Status}}
	I0828 18:37:59.711664  506953 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-807226"
	I0828 18:37:59.711702  506953 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-807226"
	I0828 18:37:59.711928  506953 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-807226"
	I0828 18:37:59.711958  506953 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-807226"
	W0828 18:37:59.711965  506953 addons.go:243] addon metrics-server should already be in state true
	I0828 18:37:59.711990  506953 host.go:66] Checking if "old-k8s-version-807226" exists ...
	I0828 18:37:59.712361  506953 cli_runner.go:164] Run: docker container inspect old-k8s-version-807226 --format={{.State.Status}}
	I0828 18:37:59.712503  506953 cli_runner.go:164] Run: docker container inspect old-k8s-version-807226 --format={{.State.Status}}
	I0828 18:37:59.712870  506953 addons.go:69] Setting dashboard=true in profile "old-k8s-version-807226"
	I0828 18:37:59.712908  506953 addons.go:234] Setting addon dashboard=true in "old-k8s-version-807226"
	W0828 18:37:59.712918  506953 addons.go:243] addon dashboard should already be in state true
	I0828 18:37:59.712946  506953 host.go:66] Checking if "old-k8s-version-807226" exists ...
	I0828 18:37:59.713351  506953 cli_runner.go:164] Run: docker container inspect old-k8s-version-807226 --format={{.State.Status}}
	I0828 18:37:59.715095  506953 out.go:177] * Verifying Kubernetes components...
	I0828 18:37:59.721035  506953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:37:59.771147  506953 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:37:59.771226  506953 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:37:59.775571  506953 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:37:59.775604  506953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:37:59.775673  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:59.776303  506953 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-807226"
	W0828 18:37:59.776324  506953 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:37:59.776349  506953 host.go:66] Checking if "old-k8s-version-807226" exists ...
	I0828 18:37:59.776772  506953 cli_runner.go:164] Run: docker container inspect old-k8s-version-807226 --format={{.State.Status}}
	I0828 18:37:59.777306  506953 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:37:59.777321  506953 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:37:59.777370  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:59.780097  506953 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0828 18:37:59.787194  506953 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0828 18:37:59.795311  506953 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0828 18:37:59.795340  506953 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0828 18:37:59.795490  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:59.828682  506953 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:37:59.828704  506953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:37:59.828773  506953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-807226
	I0828 18:37:59.833607  506953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/old-k8s-version-807226/id_rsa Username:docker}
	I0828 18:37:59.851159  506953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/old-k8s-version-807226/id_rsa Username:docker}
	I0828 18:37:59.860729  506953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/old-k8s-version-807226/id_rsa Username:docker}
	I0828 18:37:59.871518  506953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/old-k8s-version-807226/id_rsa Username:docker}
	I0828 18:37:59.901244  506953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:37:59.925811  506953 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-807226" to be "Ready" ...
	I0828 18:37:59.978465  506953 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:37:59.978485  506953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:38:00.000890  506953 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:38:00.000915  506953 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:38:00.032966  506953 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0828 18:38:00.033067  506953 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0828 18:38:00.040002  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:38:00.058853  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:38:00.059858  506953 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:38:00.059887  506953 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:38:00.097040  506953 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0828 18:38:00.097072  506953 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0828 18:38:00.183608  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:38:00.220274  506953 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0828 18:38:00.220307  506953 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0828 18:38:00.326497  506953 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0828 18:38:00.326530  506953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0828 18:38:00.397493  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0828 18:38:00.397650  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.397666  506953 retry.go:31] will retry after 300.185424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.397761  506953 retry.go:31] will retry after 213.243774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.408022  506953 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0828 18:38:00.408055  506953 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0828 18:38:00.432498  506953 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0828 18:38:00.432533  506953 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0828 18:38:00.451730  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.451766  506953 retry.go:31] will retry after 359.521445ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.459124  506953 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0828 18:38:00.459149  506953 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0828 18:38:00.483290  506953 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0828 18:38:00.483318  506953 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0828 18:38:00.506885  506953 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0828 18:38:00.506913  506953 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0828 18:38:00.528524  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0828 18:38:00.611915  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0828 18:38:00.619361  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.619455  506953 retry.go:31] will retry after 239.345933ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.698500  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0828 18:38:00.712359  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.712432  506953 retry.go:31] will retry after 330.288535ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0828 18:38:00.788213  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.788246  506953 retry.go:31] will retry after 537.577587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.812381  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:38:00.859837  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0828 18:38:00.904629  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.904694  506953 retry.go:31] will retry after 512.132356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0828 18:38:00.943218  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:00.943251  506953 retry.go:31] will retry after 436.964457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.043489  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0828 18:38:01.126744  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.126805  506953 retry.go:31] will retry after 391.389793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.326174  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:38:01.380487  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0828 18:38:01.418087  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.418130  506953 retry.go:31] will retry after 829.212482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.418247  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0828 18:38:01.498841  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.498875  506953 retry.go:31] will retry after 367.793495ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.519192  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0828 18:38:01.526672  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.526712  506953 retry.go:31] will retry after 348.822857ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0828 18:38:01.601198  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.601234  506953 retry.go:31] will retry after 1.011712929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.867816  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0828 18:38:01.876357  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:38:01.927196  506953 node_ready.go:53] error getting node "old-k8s-version-807226": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-807226": dial tcp 192.168.85.2:8443: connect: connection refused
	W0828 18:38:01.982702  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.982736  506953 retry.go:31] will retry after 951.807808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0828 18:38:01.991813  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:01.991843  506953 retry.go:31] will retry after 1.160107563s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:02.248235  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0828 18:38:02.337302  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:02.337336  506953 retry.go:31] will retry after 681.33404ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:02.614055  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0828 18:38:02.711340  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:02.711428  506953 retry.go:31] will retry after 1.573456885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:02.935495  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0828 18:38:03.018868  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:38:03.152420  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0828 18:38:03.172423  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:03.172451  506953 retry.go:31] will retry after 746.254362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0828 18:38:03.172484  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:03.172491  506953 retry.go:31] will retry after 1.570780352s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0828 18:38:03.313311  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:03.313362  506953 retry.go:31] will retry after 1.359068871s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:03.919487  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0828 18:38:03.927293  506953 node_ready.go:53] error getting node "old-k8s-version-807226": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-807226": dial tcp 192.168.85.2:8443: connect: connection refused
	W0828 18:38:04.076462  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:04.076496  506953 retry.go:31] will retry after 1.016929328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:04.286015  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0828 18:38:04.466305  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:04.466335  506953 retry.go:31] will retry after 2.05940421s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:04.672588  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:38:04.743897  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0828 18:38:04.828668  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:04.828702  506953 retry.go:31] will retry after 1.1691235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0828 18:38:04.927951  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:04.927984  506953 retry.go:31] will retry after 2.169002329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:05.094390  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0828 18:38:05.225664  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:05.225699  506953 retry.go:31] will retry after 2.144135916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:05.927358  506953 node_ready.go:53] error getting node "old-k8s-version-807226": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-807226": dial tcp 192.168.85.2:8443: connect: connection refused
	I0828 18:38:05.998542  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0828 18:38:06.093223  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:06.093266  506953 retry.go:31] will retry after 3.969728701s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:06.526007  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0828 18:38:06.597423  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:06.597456  506953 retry.go:31] will retry after 2.32742873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:07.097733  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0828 18:38:07.167690  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:07.167723  506953 retry.go:31] will retry after 2.687263918s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:07.371006  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0828 18:38:07.445161  506953 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:07.445195  506953 retry.go:31] will retry after 4.333405768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0828 18:38:08.426316  506953 node_ready.go:53] error getting node "old-k8s-version-807226": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-807226": dial tcp 192.168.85.2:8443: connect: connection refused
	I0828 18:38:08.925499  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:38:09.855288  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:38:10.063802  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:38:11.779228  506953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0828 18:38:16.323487  506953 node_ready.go:49] node "old-k8s-version-807226" has status "Ready":"True"
	I0828 18:38:16.323512  506953 node_ready.go:38] duration metric: took 16.397661694s for node "old-k8s-version-807226" to be "Ready" ...
	I0828 18:38:16.323525  506953 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:38:16.484748  506953 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-2pk8p" in "kube-system" namespace to be "Ready" ...
	I0828 18:38:16.511798  506953 pod_ready.go:93] pod "coredns-74ff55c5b-2pk8p" in "kube-system" namespace has status "Ready":"True"
	I0828 18:38:16.511870  506953 pod_ready.go:82] duration metric: took 27.034678ms for pod "coredns-74ff55c5b-2pk8p" in "kube-system" namespace to be "Ready" ...
	I0828 18:38:16.511898  506953 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-807226" in "kube-system" namespace to be "Ready" ...
	I0828 18:38:16.544133  506953 pod_ready.go:93] pod "etcd-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"True"
	I0828 18:38:16.544210  506953 pod_ready.go:82] duration metric: took 32.29081ms for pod "etcd-old-k8s-version-807226" in "kube-system" namespace to be "Ready" ...
	I0828 18:38:16.544240  506953 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-807226" in "kube-system" namespace to be "Ready" ...
	I0828 18:38:17.047777  506953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.122239954s)
	I0828 18:38:17.047890  506953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.192569674s)
	I0828 18:38:17.208247  506953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.144397435s)
	I0828 18:38:17.208328  506953 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-807226"
	I0828 18:38:17.403470  506953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.624193559s)
	I0828 18:38:17.405950  506953 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-807226 addons enable metrics-server
	
	I0828 18:38:17.408329  506953 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0828 18:38:17.410050  506953 addons.go:510] duration metric: took 17.699213389s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0828 18:38:18.551152  506953 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:20.552163  506953 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:23.050074  506953 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:25.050717  506953 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:26.050167  506953 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"True"
	I0828 18:38:26.050192  506953 pod_ready.go:82] duration metric: took 9.505915174s for pod "kube-apiserver-old-k8s-version-807226" in "kube-system" namespace to be "Ready" ...
	I0828 18:38:26.050205  506953 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace to be "Ready" ...
	I0828 18:38:28.056331  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:30.080903  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:32.555861  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:34.556449  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:36.557842  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:38.562571  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:41.057273  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:43.078815  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:45.556448  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:47.557482  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:50.083873  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:52.556997  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:54.557547  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:56.558218  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:38:59.057063  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:01.058787  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:03.065105  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:05.557424  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:07.557773  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:10.058350  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:12.558516  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:15.062195  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:17.556690  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:20.063479  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:22.068988  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:24.557199  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:26.557240  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:28.561807  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:31.056648  506953 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:33.057299  506953 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"True"
	I0828 18:39:33.057330  506953 pod_ready.go:82] duration metric: took 1m7.007116091s for pod "kube-controller-manager-old-k8s-version-807226" in "kube-system" namespace to be "Ready" ...
	I0828 18:39:33.057344  506953 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jqkn2" in "kube-system" namespace to be "Ready" ...
	I0828 18:39:33.063304  506953 pod_ready.go:93] pod "kube-proxy-jqkn2" in "kube-system" namespace has status "Ready":"True"
	I0828 18:39:33.063331  506953 pod_ready.go:82] duration metric: took 5.979453ms for pod "kube-proxy-jqkn2" in "kube-system" namespace to be "Ready" ...
	I0828 18:39:33.063342  506953 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-807226" in "kube-system" namespace to be "Ready" ...
	I0828 18:39:35.069216  506953 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:37.573195  506953 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:40.070622  506953 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:42.071072  506953 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:44.569284  506953 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:45.104279  506953 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-807226" in "kube-system" namespace has status "Ready":"True"
	I0828 18:39:45.104322  506953 pod_ready.go:82] duration metric: took 12.040970641s for pod "kube-scheduler-old-k8s-version-807226" in "kube-system" namespace to be "Ready" ...
	I0828 18:39:45.104336  506953 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace to be "Ready" ...
	I0828 18:39:47.112709  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:49.611202  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:51.611294  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:54.110879  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:56.611762  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:39:59.110850  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:01.611520  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:03.611995  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:06.110196  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:08.110790  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:10.112276  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:12.617226  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:15.112550  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:17.611340  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:20.111111  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:22.111257  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:24.611242  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:27.110441  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:29.110632  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:31.611669  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:34.111588  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:36.111860  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:38.611057  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:40.611117  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:43.110002  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:45.134061  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:47.610241  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:49.611506  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:52.110064  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:54.112552  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:56.610443  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:40:58.610818  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:01.110901  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:03.113381  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:05.611016  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:08.111084  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:10.111641  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:12.613292  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:15.112435  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:17.610062  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:19.610166  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:21.610553  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:23.612928  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:26.110283  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:28.110953  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:30.113728  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:32.609702  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:34.610589  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:37.110177  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:39.112781  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:41.611127  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:43.611453  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:46.111742  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:48.610698  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:50.610843  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:53.110079  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:55.111134  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:57.610457  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:41:59.611921  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:01.612397  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:04.110637  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:06.110923  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:08.611284  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:10.612780  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:13.110629  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:15.112789  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:17.610982  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:20.112525  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:22.112731  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:24.619666  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:27.111679  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:29.612073  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:32.110983  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:34.611283  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:37.116493  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:39.611324  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:42.113925  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:44.611189  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:47.110980  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:49.610718  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:52.111783  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:54.610570  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:56.610962  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:42:58.611146  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:00.611451  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:03.110519  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:05.111415  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:07.616059  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:10.111462  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:12.613683  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:15.112010  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:17.112216  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:19.610348  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:22.110816  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:24.112210  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:26.611787  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:29.110609  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:31.110818  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:33.611277  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:35.615069  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:37.619340  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:40.114463  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:42.612229  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:44.612259  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:45.131929  506953 pod_ready.go:82] duration metric: took 4m0.027576667s for pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace to be "Ready" ...
	E0828 18:43:45.132022  506953 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:43:45.132083  506953 pod_ready.go:39] duration metric: took 5m28.808514978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:43:45.132131  506953 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:43:45.132195  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:43:45.132285  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:43:45.253912  506953 cri.go:89] found id: "ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75"
	I0828 18:43:45.254000  506953 cri.go:89] found id: "a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3"
	I0828 18:43:45.254023  506953 cri.go:89] found id: ""
	I0828 18:43:45.254082  506953 logs.go:276] 2 containers: [ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75 a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3]
	I0828 18:43:45.254191  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.265245  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.271205  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0828 18:43:45.271472  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:43:45.352884  506953 cri.go:89] found id: "2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127"
	I0828 18:43:45.352986  506953 cri.go:89] found id: "24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec"
	I0828 18:43:45.353008  506953 cri.go:89] found id: ""
	I0828 18:43:45.353029  506953 logs.go:276] 2 containers: [2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127 24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec]
	I0828 18:43:45.353161  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.358305  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.364298  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0828 18:43:45.364371  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:43:45.425046  506953 cri.go:89] found id: "1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233"
	I0828 18:43:45.425068  506953 cri.go:89] found id: "64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1"
	I0828 18:43:45.425074  506953 cri.go:89] found id: ""
	I0828 18:43:45.425081  506953 logs.go:276] 2 containers: [1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233 64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1]
	I0828 18:43:45.425142  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.430033  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.434279  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:43:45.434409  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:43:45.487807  506953 cri.go:89] found id: "d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4"
	I0828 18:43:45.487881  506953 cri.go:89] found id: "e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6"
	I0828 18:43:45.487903  506953 cri.go:89] found id: ""
	I0828 18:43:45.487932  506953 logs.go:276] 2 containers: [d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4 e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6]
	I0828 18:43:45.488019  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.492443  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.496066  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:43:45.496184  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:43:45.547025  506953 cri.go:89] found id: "39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7"
	I0828 18:43:45.547099  506953 cri.go:89] found id: "1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6"
	I0828 18:43:45.547118  506953 cri.go:89] found id: ""
	I0828 18:43:45.547143  506953 logs.go:276] 2 containers: [39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7 1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6]
	I0828 18:43:45.547235  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.553113  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.557602  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:43:45.557671  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:43:45.669527  506953 cri.go:89] found id: "ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69"
	I0828 18:43:45.669600  506953 cri.go:89] found id: "b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1"
	I0828 18:43:45.669620  506953 cri.go:89] found id: ""
	I0828 18:43:45.669642  506953 logs.go:276] 2 containers: [ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69 b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1]
	I0828 18:43:45.669730  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.673815  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.677575  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0828 18:43:45.677717  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:43:45.727588  506953 cri.go:89] found id: "f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b"
	I0828 18:43:45.727663  506953 cri.go:89] found id: "e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b"
	I0828 18:43:45.727683  506953 cri.go:89] found id: ""
	I0828 18:43:45.727710  506953 logs.go:276] 2 containers: [f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b]
	I0828 18:43:45.727798  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.731960  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.735682  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:43:45.735815  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:43:45.778170  506953 cri.go:89] found id: "3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5"
	I0828 18:43:45.778249  506953 cri.go:89] found id: ""
	I0828 18:43:45.778281  506953 logs.go:276] 1 containers: [3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5]
	I0828 18:43:45.778364  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.782201  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:43:45.782336  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:43:45.871370  506953 cri.go:89] found id: "16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9"
	I0828 18:43:45.871472  506953 cri.go:89] found id: "b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da"
	I0828 18:43:45.871492  506953 cri.go:89] found id: ""
	I0828 18:43:45.871519  506953 logs.go:276] 2 containers: [16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9 b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da]
	I0828 18:43:45.871612  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.880786  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.884382  506953 logs.go:123] Gathering logs for container status ...
	I0828 18:43:45.884456  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:43:45.947952  506953 logs.go:123] Gathering logs for etcd [24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec] ...
	I0828 18:43:45.948029  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec"
	I0828 18:43:46.019815  506953 logs.go:123] Gathering logs for kindnet [e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b] ...
	I0828 18:43:46.019931  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b"
	I0828 18:43:46.081805  506953 logs.go:123] Gathering logs for kube-scheduler [d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4] ...
	I0828 18:43:46.081891  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4"
	I0828 18:43:46.147525  506953 logs.go:123] Gathering logs for kube-scheduler [e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6] ...
	I0828 18:43:46.147601  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6"
	I0828 18:43:46.234768  506953 logs.go:123] Gathering logs for kube-controller-manager [b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1] ...
	I0828 18:43:46.234799  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1"
	I0828 18:43:46.314095  506953 logs.go:123] Gathering logs for kindnet [f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b] ...
	I0828 18:43:46.314134  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b"
	I0828 18:43:46.383950  506953 logs.go:123] Gathering logs for storage-provisioner [16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9] ...
	I0828 18:43:46.383989  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9"
	I0828 18:43:46.441889  506953 logs.go:123] Gathering logs for dmesg ...
	I0828 18:43:46.441926  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:43:46.461855  506953 logs.go:123] Gathering logs for coredns [64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1] ...
	I0828 18:43:46.461884  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1"
	I0828 18:43:46.503471  506953 logs.go:123] Gathering logs for etcd [2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127] ...
	I0828 18:43:46.503499  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127"
	I0828 18:43:46.561827  506953 logs.go:123] Gathering logs for kube-proxy [39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7] ...
	I0828 18:43:46.561858  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7"
	I0828 18:43:46.603506  506953 logs.go:123] Gathering logs for kube-proxy [1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6] ...
	I0828 18:43:46.603580  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6"
	I0828 18:43:46.646422  506953 logs.go:123] Gathering logs for kubernetes-dashboard [3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5] ...
	I0828 18:43:46.646452  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5"
	I0828 18:43:46.694657  506953 logs.go:123] Gathering logs for storage-provisioner [b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da] ...
	I0828 18:43:46.694687  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da"
	I0828 18:43:46.734622  506953 logs.go:123] Gathering logs for containerd ...
	I0828 18:43:46.734659  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0828 18:43:46.796237  506953 logs.go:123] Gathering logs for kubelet ...
	I0828 18:43:46.796277  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 18:43:46.867037  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.295927     660 reflector.go:138] object-"kube-system"/"coredns-token-njr82": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-njr82" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.867264  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296027     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.867499  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296179     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-kglnx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kglnx" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.867714  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296232     660 reflector.go:138] object-"kube-system"/"kindnet-token-hjfcc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-hjfcc" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.867940  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296334     660 reflector.go:138] object-"kube-system"/"metrics-server-token-6hcmf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-6hcmf" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.868172  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296380     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-wcdgz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-wcdgz" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.868381  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296461     660 reflector.go:138] object-"default"/"default-token-j8qlp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-j8qlp" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.868591  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.304570     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.878017  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:20 old-k8s-version-807226 kubelet[660]: E0828 18:38:20.257115     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:46.878227  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:20 old-k8s-version-807226 kubelet[660]: E0828 18:38:20.830731     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.881091  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:31 old-k8s-version-807226 kubelet[660]: E0828 18:38:31.712112     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:46.882779  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:42 old-k8s-version-807226 kubelet[660]: E0828 18:38:42.703684     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.883440  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:43 old-k8s-version-807226 kubelet[660]: E0828 18:38:43.913087     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.883774  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:44 old-k8s-version-807226 kubelet[660]: E0828 18:38:44.913852     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.884556  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:48 old-k8s-version-807226 kubelet[660]: E0828 18:38:48.925862     660 pod_workers.go:191] Error syncing pod 24508be5-83e6-4672-82ce-b943d2db673c ("storage-provisioner_kube-system(24508be5-83e6-4672-82ce-b943d2db673c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(24508be5-83e6-4672-82ce-b943d2db673c)"
	W0828 18:43:46.884884  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:49 old-k8s-version-807226 kubelet[660]: E0828 18:38:49.527740     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.887383  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:55 old-k8s-version-807226 kubelet[660]: E0828 18:38:55.711644     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:46.888497  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:03 old-k8s-version-807226 kubelet[660]: E0828 18:39:03.033822     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.888690  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:08 old-k8s-version-807226 kubelet[660]: E0828 18:39:08.701849     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.889023  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:09 old-k8s-version-807226 kubelet[660]: E0828 18:39:09.528085     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.889652  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:23 old-k8s-version-807226 kubelet[660]: E0828 18:39:23.137950     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.889841  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:23 old-k8s-version-807226 kubelet[660]: E0828 18:39:23.701943     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.890169  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:29 old-k8s-version-807226 kubelet[660]: E0828 18:39:29.527775     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.892795  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:36 old-k8s-version-807226 kubelet[660]: E0828 18:39:36.713928     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:46.893136  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:42 old-k8s-version-807226 kubelet[660]: E0828 18:39:42.701847     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.893328  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:47 old-k8s-version-807226 kubelet[660]: E0828 18:39:47.701990     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.893661  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:56 old-k8s-version-807226 kubelet[660]: E0828 18:39:56.701813     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.893847  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:59 old-k8s-version-807226 kubelet[660]: E0828 18:39:59.703610     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.894439  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:09 old-k8s-version-807226 kubelet[660]: E0828 18:40:09.269052     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.894771  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:10 old-k8s-version-807226 kubelet[660]: E0828 18:40:10.274200     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.894956  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:13 old-k8s-version-807226 kubelet[660]: E0828 18:40:13.701637     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.895289  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:22 old-k8s-version-807226 kubelet[660]: E0828 18:40:22.700967     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.895483  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:28 old-k8s-version-807226 kubelet[660]: E0828 18:40:28.701621     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.895811  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:35 old-k8s-version-807226 kubelet[660]: E0828 18:40:35.701534     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.895996  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:41 old-k8s-version-807226 kubelet[660]: E0828 18:40:41.701505     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.896328  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:46 old-k8s-version-807226 kubelet[660]: E0828 18:40:46.700951     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.896520  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:55 old-k8s-version-807226 kubelet[660]: E0828 18:40:55.701917     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.896871  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:01 old-k8s-version-807226 kubelet[660]: E0828 18:41:01.700919     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.899343  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:08 old-k8s-version-807226 kubelet[660]: E0828 18:41:08.714113     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:46.899753  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:16 old-k8s-version-807226 kubelet[660]: E0828 18:41:16.701046     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.899944  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:23 old-k8s-version-807226 kubelet[660]: E0828 18:41:23.701588     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.900537  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:32 old-k8s-version-807226 kubelet[660]: E0828 18:41:32.512783     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.900743  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:38 old-k8s-version-807226 kubelet[660]: E0828 18:41:38.701192     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.901089  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:39 old-k8s-version-807226 kubelet[660]: E0828 18:41:39.527936     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.901275  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:49 old-k8s-version-807226 kubelet[660]: E0828 18:41:49.703090     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.901609  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:50 old-k8s-version-807226 kubelet[660]: E0828 18:41:50.700970     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.901795  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:00 old-k8s-version-807226 kubelet[660]: E0828 18:42:00.701342     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.902122  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:01 old-k8s-version-807226 kubelet[660]: E0828 18:42:01.700946     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.902307  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:11 old-k8s-version-807226 kubelet[660]: E0828 18:42:11.701847     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.902635  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:13 old-k8s-version-807226 kubelet[660]: E0828 18:42:13.701572     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.902845  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:23 old-k8s-version-807226 kubelet[660]: E0828 18:42:23.704011     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.905145  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:25 old-k8s-version-807226 kubelet[660]: E0828 18:42:25.701382     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.905349  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:34 old-k8s-version-807226 kubelet[660]: E0828 18:42:34.701350     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.905683  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:38 old-k8s-version-807226 kubelet[660]: E0828 18:42:38.701282     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.905871  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:49 old-k8s-version-807226 kubelet[660]: E0828 18:42:49.701288     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.906205  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:49 old-k8s-version-807226 kubelet[660]: E0828 18:42:49.702400     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.906536  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:02 old-k8s-version-807226 kubelet[660]: E0828 18:43:02.701048     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.906722  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:04 old-k8s-version-807226 kubelet[660]: E0828 18:43:04.701421     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.907061  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:13 old-k8s-version-807226 kubelet[660]: E0828 18:43:13.704867     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.907248  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:15 old-k8s-version-807226 kubelet[660]: E0828 18:43:15.701573     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.907593  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:24 old-k8s-version-807226 kubelet[660]: E0828 18:43:24.700926     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.907780  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:27 old-k8s-version-807226 kubelet[660]: E0828 18:43:27.701547     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.908111  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.701771     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.908298  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.702673     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0828 18:43:46.908309  506953 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:43:46.908324  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:43:47.253484  506953 logs.go:123] Gathering logs for coredns [1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233] ...
	I0828 18:43:47.253521  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233"
	I0828 18:43:47.290727  506953 logs.go:123] Gathering logs for kube-controller-manager [ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69] ...
	I0828 18:43:47.290767  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69"
	I0828 18:43:47.365736  506953 logs.go:123] Gathering logs for kube-apiserver [ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75] ...
	I0828 18:43:47.365773  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75"
	I0828 18:43:47.448039  506953 logs.go:123] Gathering logs for kube-apiserver [a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3] ...
	I0828 18:43:47.448074  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3"
	I0828 18:43:47.518932  506953 out.go:358] Setting ErrFile to fd 2...
	I0828 18:43:47.518964  506953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 18:43:47.519016  506953 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0828 18:43:47.519030  506953 out.go:270]   Aug 28 18:43:15 old-k8s-version-807226 kubelet[660]: E0828 18:43:15.701573     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 28 18:43:15 old-k8s-version-807226 kubelet[660]: E0828 18:43:15.701573     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:47.519037  506953 out.go:270]   Aug 28 18:43:24 old-k8s-version-807226 kubelet[660]: E0828 18:43:24.700926     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	  Aug 28 18:43:24 old-k8s-version-807226 kubelet[660]: E0828 18:43:24.700926     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:47.519048  506953 out.go:270]   Aug 28 18:43:27 old-k8s-version-807226 kubelet[660]: E0828 18:43:27.701547     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 28 18:43:27 old-k8s-version-807226 kubelet[660]: E0828 18:43:27.701547     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:47.519054  506953 out.go:270]   Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.701771     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	  Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.701771     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:47.519065  506953 out.go:270]   Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.702673     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.702673     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0828 18:43:47.519088  506953 out.go:358] Setting ErrFile to fd 2...
	I0828 18:43:47.519093  506953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:43:57.519796  506953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:57.536692  506953 api_server.go:72] duration metric: took 5m57.826170676s to wait for apiserver process to appear ...
	I0828 18:43:57.536720  506953 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:43:57.536758  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:43:57.536817  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:43:57.595661  506953 cri.go:89] found id: "ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75"
	I0828 18:43:57.595682  506953 cri.go:89] found id: "a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3"
	I0828 18:43:57.595687  506953 cri.go:89] found id: ""
	I0828 18:43:57.595694  506953 logs.go:276] 2 containers: [ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75 a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3]
	I0828 18:43:57.595754  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.599641  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.605878  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0828 18:43:57.605943  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:43:57.663987  506953 cri.go:89] found id: "2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127"
	I0828 18:43:57.664005  506953 cri.go:89] found id: "24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec"
	I0828 18:43:57.664010  506953 cri.go:89] found id: ""
	I0828 18:43:57.664017  506953 logs.go:276] 2 containers: [2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127 24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec]
	I0828 18:43:57.664076  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.668859  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.672658  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0828 18:43:57.672723  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:43:57.748334  506953 cri.go:89] found id: "1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233"
	I0828 18:43:57.748354  506953 cri.go:89] found id: "64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1"
	I0828 18:43:57.748358  506953 cri.go:89] found id: ""
	I0828 18:43:57.748365  506953 logs.go:276] 2 containers: [1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233 64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1]
	I0828 18:43:57.748419  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.752475  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.756288  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:43:57.756354  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:43:57.805243  506953 cri.go:89] found id: "d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4"
	I0828 18:43:57.805262  506953 cri.go:89] found id: "e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6"
	I0828 18:43:57.805267  506953 cri.go:89] found id: ""
	I0828 18:43:57.805274  506953 logs.go:276] 2 containers: [d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4 e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6]
	I0828 18:43:57.805328  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.809042  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.813760  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:43:57.813826  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:43:57.878594  506953 cri.go:89] found id: "39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7"
	I0828 18:43:57.878669  506953 cri.go:89] found id: "1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6"
	I0828 18:43:57.878689  506953 cri.go:89] found id: ""
	I0828 18:43:57.878716  506953 logs.go:276] 2 containers: [39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7 1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6]
	I0828 18:43:57.878795  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.883014  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.890506  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:43:57.890624  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:43:57.972627  506953 cri.go:89] found id: "ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69"
	I0828 18:43:57.972698  506953 cri.go:89] found id: "b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1"
	I0828 18:43:57.972717  506953 cri.go:89] found id: ""
	I0828 18:43:57.972742  506953 logs.go:276] 2 containers: [ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69 b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1]
	I0828 18:43:57.972820  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.978344  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.982006  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0828 18:43:57.982108  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:43:58.032507  506953 cri.go:89] found id: "f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b"
	I0828 18:43:58.032586  506953 cri.go:89] found id: "e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b"
	I0828 18:43:58.032608  506953 cri.go:89] found id: ""
	I0828 18:43:58.032636  506953 logs.go:276] 2 containers: [f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b]
	I0828 18:43:58.032717  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:58.037284  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:58.041323  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:43:58.041439  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:43:58.100965  506953 cri.go:89] found id: "16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9"
	I0828 18:43:58.101034  506953 cri.go:89] found id: "b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da"
	I0828 18:43:58.101059  506953 cri.go:89] found id: ""
	I0828 18:43:58.101086  506953 logs.go:276] 2 containers: [16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9 b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da]
	I0828 18:43:58.101161  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:58.104945  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:58.108551  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:43:58.108643  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:43:58.202118  506953 cri.go:89] found id: "3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5"
	I0828 18:43:58.202187  506953 cri.go:89] found id: ""
	I0828 18:43:58.202209  506953 logs.go:276] 1 containers: [3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5]
	I0828 18:43:58.202282  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:58.206252  506953 logs.go:123] Gathering logs for storage-provisioner [16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9] ...
	I0828 18:43:58.206311  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9"
	I0828 18:43:58.270553  506953 logs.go:123] Gathering logs for storage-provisioner [b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da] ...
	I0828 18:43:58.270627  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da"
	I0828 18:43:58.323427  506953 logs.go:123] Gathering logs for kubelet ...
	I0828 18:43:58.323499  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 18:43:58.379618  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.295927     660 reflector.go:138] object-"kube-system"/"coredns-token-njr82": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-njr82" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.379877  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296027     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.380114  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296179     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-kglnx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kglnx" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.380389  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296232     660 reflector.go:138] object-"kube-system"/"kindnet-token-hjfcc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-hjfcc" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.380643  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296334     660 reflector.go:138] object-"kube-system"/"metrics-server-token-6hcmf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-6hcmf" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.380890  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296380     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-wcdgz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-wcdgz" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.381117  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296461     660 reflector.go:138] object-"default"/"default-token-j8qlp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-j8qlp" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.381343  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.304570     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.390666  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:20 old-k8s-version-807226 kubelet[660]: E0828 18:38:20.257115     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:58.390891  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:20 old-k8s-version-807226 kubelet[660]: E0828 18:38:20.830731     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.393710  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:31 old-k8s-version-807226 kubelet[660]: E0828 18:38:31.712112     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:58.395431  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:42 old-k8s-version-807226 kubelet[660]: E0828 18:38:42.703684     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.396041  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:43 old-k8s-version-807226 kubelet[660]: E0828 18:38:43.913087     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.396398  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:44 old-k8s-version-807226 kubelet[660]: E0828 18:38:44.913852     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.397187  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:48 old-k8s-version-807226 kubelet[660]: E0828 18:38:48.925862     660 pod_workers.go:191] Error syncing pod 24508be5-83e6-4672-82ce-b943d2db673c ("storage-provisioner_kube-system(24508be5-83e6-4672-82ce-b943d2db673c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(24508be5-83e6-4672-82ce-b943d2db673c)"
	W0828 18:43:58.397532  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:49 old-k8s-version-807226 kubelet[660]: E0828 18:38:49.527740     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.400075  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:55 old-k8s-version-807226 kubelet[660]: E0828 18:38:55.711644     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:58.401166  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:03 old-k8s-version-807226 kubelet[660]: E0828 18:39:03.033822     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.401370  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:08 old-k8s-version-807226 kubelet[660]: E0828 18:39:08.701849     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.401718  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:09 old-k8s-version-807226 kubelet[660]: E0828 18:39:09.528085     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.402331  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:23 old-k8s-version-807226 kubelet[660]: E0828 18:39:23.137950     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.402533  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:23 old-k8s-version-807226 kubelet[660]: E0828 18:39:23.701943     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.402874  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:29 old-k8s-version-807226 kubelet[660]: E0828 18:39:29.527775     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.405316  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:36 old-k8s-version-807226 kubelet[660]: E0828 18:39:36.713928     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:58.405660  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:42 old-k8s-version-807226 kubelet[660]: E0828 18:39:42.701847     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.405864  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:47 old-k8s-version-807226 kubelet[660]: E0828 18:39:47.701990     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.406221  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:56 old-k8s-version-807226 kubelet[660]: E0828 18:39:56.701813     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.406424  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:59 old-k8s-version-807226 kubelet[660]: E0828 18:39:59.703610     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.407057  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:09 old-k8s-version-807226 kubelet[660]: E0828 18:40:09.269052     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.407409  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:10 old-k8s-version-807226 kubelet[660]: E0828 18:40:10.274200     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.407613  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:13 old-k8s-version-807226 kubelet[660]: E0828 18:40:13.701637     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.407960  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:22 old-k8s-version-807226 kubelet[660]: E0828 18:40:22.700967     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.408160  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:28 old-k8s-version-807226 kubelet[660]: E0828 18:40:28.701621     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.408507  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:35 old-k8s-version-807226 kubelet[660]: E0828 18:40:35.701534     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.408709  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:41 old-k8s-version-807226 kubelet[660]: E0828 18:40:41.701505     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.411739  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:46 old-k8s-version-807226 kubelet[660]: E0828 18:40:46.700951     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.411947  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:55 old-k8s-version-807226 kubelet[660]: E0828 18:40:55.701917     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.412295  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:01 old-k8s-version-807226 kubelet[660]: E0828 18:41:01.700919     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.414735  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:08 old-k8s-version-807226 kubelet[660]: E0828 18:41:08.714113     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:58.415082  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:16 old-k8s-version-807226 kubelet[660]: E0828 18:41:16.701046     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.415285  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:23 old-k8s-version-807226 kubelet[660]: E0828 18:41:23.701588     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.415901  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:32 old-k8s-version-807226 kubelet[660]: E0828 18:41:32.512783     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.416107  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:38 old-k8s-version-807226 kubelet[660]: E0828 18:41:38.701192     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.416452  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:39 old-k8s-version-807226 kubelet[660]: E0828 18:41:39.527936     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.416664  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:49 old-k8s-version-807226 kubelet[660]: E0828 18:41:49.703090     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.417006  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:50 old-k8s-version-807226 kubelet[660]: E0828 18:41:50.700970     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.417210  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:00 old-k8s-version-807226 kubelet[660]: E0828 18:42:00.701342     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.417552  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:01 old-k8s-version-807226 kubelet[660]: E0828 18:42:01.700946     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.417752  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:11 old-k8s-version-807226 kubelet[660]: E0828 18:42:11.701847     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.418094  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:13 old-k8s-version-807226 kubelet[660]: E0828 18:42:13.701572     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.418297  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:23 old-k8s-version-807226 kubelet[660]: E0828 18:42:23.704011     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.418644  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:25 old-k8s-version-807226 kubelet[660]: E0828 18:42:25.701382     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.418865  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:34 old-k8s-version-807226 kubelet[660]: E0828 18:42:34.701350     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.419208  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:38 old-k8s-version-807226 kubelet[660]: E0828 18:42:38.701282     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.419424  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:49 old-k8s-version-807226 kubelet[660]: E0828 18:42:49.701288     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.419767  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:49 old-k8s-version-807226 kubelet[660]: E0828 18:42:49.702400     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.420117  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:02 old-k8s-version-807226 kubelet[660]: E0828 18:43:02.701048     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.420323  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:04 old-k8s-version-807226 kubelet[660]: E0828 18:43:04.701421     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.420686  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:13 old-k8s-version-807226 kubelet[660]: E0828 18:43:13.704867     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.420901  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:15 old-k8s-version-807226 kubelet[660]: E0828 18:43:15.701573     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.421243  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:24 old-k8s-version-807226 kubelet[660]: E0828 18:43:24.700926     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.421448  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:27 old-k8s-version-807226 kubelet[660]: E0828 18:43:27.701547     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.421790  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.701771     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.421993  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.702673     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.422337  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:50 old-k8s-version-807226 kubelet[660]: E0828 18:43:50.701089     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.424806  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:52 old-k8s-version-807226 kubelet[660]: E0828 18:43:52.717096     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	I0828 18:43:58.424841  506953 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:43:58.424869  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:43:58.606810  506953 logs.go:123] Gathering logs for kube-apiserver [ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75] ...
	I0828 18:43:58.606883  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75"
	I0828 18:43:58.721991  506953 logs.go:123] Gathering logs for etcd [24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec] ...
	I0828 18:43:58.722067  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec"
	I0828 18:43:58.802082  506953 logs.go:123] Gathering logs for coredns [64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1] ...
	I0828 18:43:58.802157  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1"
	I0828 18:43:58.887966  506953 logs.go:123] Gathering logs for etcd [2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127] ...
	I0828 18:43:58.887996  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127"
	I0828 18:43:58.956840  506953 logs.go:123] Gathering logs for kube-proxy [1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6] ...
	I0828 18:43:58.956921  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6"
	I0828 18:43:59.038966  506953 logs.go:123] Gathering logs for container status ...
	I0828 18:43:59.039045  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:43:59.189856  506953 logs.go:123] Gathering logs for dmesg ...
	I0828 18:43:59.189887  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:43:59.206061  506953 logs.go:123] Gathering logs for coredns [1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233] ...
	I0828 18:43:59.206131  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233"
	I0828 18:43:59.261030  506953 logs.go:123] Gathering logs for kube-scheduler [e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6] ...
	I0828 18:43:59.261110  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6"
	I0828 18:43:59.311805  506953 logs.go:123] Gathering logs for kindnet [e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b] ...
	I0828 18:43:59.311877  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b"
	I0828 18:43:59.364046  506953 logs.go:123] Gathering logs for kubernetes-dashboard [3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5] ...
	I0828 18:43:59.364075  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5"
	I0828 18:43:59.411769  506953 logs.go:123] Gathering logs for kindnet [f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b] ...
	I0828 18:43:59.411800  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b"
	I0828 18:43:59.494157  506953 logs.go:123] Gathering logs for containerd ...
	I0828 18:43:59.494189  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0828 18:43:59.563919  506953 logs.go:123] Gathering logs for kube-apiserver [a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3] ...
	I0828 18:43:59.563956  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3"
	I0828 18:43:59.630887  506953 logs.go:123] Gathering logs for kube-scheduler [d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4] ...
	I0828 18:43:59.630923  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4"
	I0828 18:43:59.734344  506953 logs.go:123] Gathering logs for kube-proxy [39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7] ...
	I0828 18:43:59.734376  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7"
	I0828 18:43:59.795673  506953 logs.go:123] Gathering logs for kube-controller-manager [ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69] ...
	I0828 18:43:59.795702  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69"
	I0828 18:43:59.895079  506953 logs.go:123] Gathering logs for kube-controller-manager [b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1] ...
	I0828 18:43:59.895118  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1"
	I0828 18:43:59.989951  506953 out.go:358] Setting ErrFile to fd 2...
	I0828 18:43:59.989985  506953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 18:43:59.990051  506953 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0828 18:43:59.990065  506953 out.go:270]   Aug 28 18:43:27 old-k8s-version-807226 kubelet[660]: E0828 18:43:27.701547     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 28 18:43:27 old-k8s-version-807226 kubelet[660]: E0828 18:43:27.701547     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:59.990077  506953 out.go:270]   Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.701771     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	  Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.701771     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:59.990090  506953 out.go:270]   Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.702673     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.702673     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:59.990095  506953 out.go:270]   Aug 28 18:43:50 old-k8s-version-807226 kubelet[660]: E0828 18:43:50.701089     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	  Aug 28 18:43:50 old-k8s-version-807226 kubelet[660]: E0828 18:43:50.701089     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:59.990222  506953 out.go:270]   Aug 28 18:43:52 old-k8s-version-807226 kubelet[660]: E0828 18:43:52.717096     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	  Aug 28 18:43:52 old-k8s-version-807226 kubelet[660]: E0828 18:43:52.717096     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	I0828 18:43:59.990246  506953 out.go:358] Setting ErrFile to fd 2...
	I0828 18:43:59.990256  506953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:44:09.991483  506953 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0828 18:44:10.007586  506953 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0828 18:44:10.010464  506953 out.go:201] 
	W0828 18:44:10.013284  506953 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0828 18:44:10.013326  506953 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0828 18:44:10.013354  506953 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0828 18:44:10.013361  506953 out.go:270] * 
	* 
	W0828 18:44:10.014310  506953 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:44:10.017397  506953 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-807226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
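The stderr above ends with minikube's own suggested recovery for K8S_UNHEALTHY_CONTROL_PLANE. A minimal sketch of that recovery, reusing the exact start arguments from the failing invocation (assuming the same out/minikube-linux-arm64 binary and working directory as the run above):
	# purge all profiles and cached state, as the log's suggestion line recommends
	out/minikube-linux-arm64 delete --all --purge
	# retry the identical start invocation that returned exit status 102
	out/minikube-linux-arm64 start -p old-k8s-version-807226 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0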
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-807226
helpers_test.go:235: (dbg) docker inspect old-k8s-version-807226:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "79a5a4a9542d797347c95ea09d92f7b8aa51701b8c39a5b70105972a32caf71b",
	        "Created": "2024-08-28T18:34:58.087427057Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 507154,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-28T18:37:52.384384559Z",
	            "FinishedAt": "2024-08-28T18:37:50.828878594Z"
	        },
	        "Image": "sha256:2cc8dc59c2b679153d99f84cc70dab3e87225f8a0d04f61969b54714a9c4cd4d",
	        "ResolvConfPath": "/var/lib/docker/containers/79a5a4a9542d797347c95ea09d92f7b8aa51701b8c39a5b70105972a32caf71b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/79a5a4a9542d797347c95ea09d92f7b8aa51701b8c39a5b70105972a32caf71b/hostname",
	        "HostsPath": "/var/lib/docker/containers/79a5a4a9542d797347c95ea09d92f7b8aa51701b8c39a5b70105972a32caf71b/hosts",
	        "LogPath": "/var/lib/docker/containers/79a5a4a9542d797347c95ea09d92f7b8aa51701b8c39a5b70105972a32caf71b/79a5a4a9542d797347c95ea09d92f7b8aa51701b8c39a5b70105972a32caf71b-json.log",
	        "Name": "/old-k8s-version-807226",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-807226:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-807226",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/db1eb8437fc9ebb4b6e8238155e02eda81648f444ba2407ce91315de27ac7cd8-init/diff:/var/lib/docker/overlay2/68d9a87ad0f678e89d4bd37593e54708aeddbc1992258326f1e13c1ad826f200/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db1eb8437fc9ebb4b6e8238155e02eda81648f444ba2407ce91315de27ac7cd8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db1eb8437fc9ebb4b6e8238155e02eda81648f444ba2407ce91315de27ac7cd8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db1eb8437fc9ebb4b6e8238155e02eda81648f444ba2407ce91315de27ac7cd8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-807226",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-807226/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-807226",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-807226",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-807226",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "adc1451069364dedd20089a4eb566f8db8e1e804cc9f612cf73d06e92cb1f8c8",
	            "SandboxKey": "/var/run/docker/netns/adc145106936",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-807226": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "fe721f2a46a6e43e7160fec2aa00ff6e02ed2af161bef211b40d3d8dde6ac319",
	                    "EndpointID": "1d694c98cdfb82b01d44f8fd54db887b9740da84dc3ca1e2011a510da8e94db5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-807226",
	                        "79a5a4a9542d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
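For reference, single fields from the inspect dump above can be read with docker inspect's Go-template formatter instead of scanning the full JSON; a sketch against this container name (one-off commands, not part of the recorded test run):
	# host port published for the apiserver's 8443/tcp (33436 in the dump above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-807226
	# container IP on the old-k8s-version-807226 network (192.168.85.2 in the dump above)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-807226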
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-807226 -n old-k8s-version-807226
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-807226 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-807226 logs -n 25: (3.10890949s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-985715                              | cert-expiration-985715       | jenkins | v1.33.1 | 28 Aug 24 18:33 UTC | 28 Aug 24 18:34 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | force-systemd-env-393848                               | force-systemd-env-393848     | jenkins | v1.33.1 | 28 Aug 24 18:34 UTC | 28 Aug 24 18:34 UTC |
	|         | ssh cat                                                |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-393848                            | force-systemd-env-393848     | jenkins | v1.33.1 | 28 Aug 24 18:34 UTC | 28 Aug 24 18:34 UTC |
	| start   | -p cert-options-173362                                 | cert-options-173362          | jenkins | v1.33.1 | 28 Aug 24 18:34 UTC | 28 Aug 24 18:34 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | cert-options-173362 ssh                                | cert-options-173362          | jenkins | v1.33.1 | 28 Aug 24 18:34 UTC | 28 Aug 24 18:34 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-173362 -- sudo                         | cert-options-173362          | jenkins | v1.33.1 | 28 Aug 24 18:34 UTC | 28 Aug 24 18:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-173362                                 | cert-options-173362          | jenkins | v1.33.1 | 28 Aug 24 18:34 UTC | 28 Aug 24 18:34 UTC |
	| start   | -p old-k8s-version-807226                              | old-k8s-version-807226       | jenkins | v1.33.1 | 28 Aug 24 18:34 UTC | 28 Aug 24 18:37 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-985715                              | cert-expiration-985715       | jenkins | v1.33.1 | 28 Aug 24 18:37 UTC | 28 Aug 24 18:37 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-985715                              | cert-expiration-985715       | jenkins | v1.33.1 | 28 Aug 24 18:37 UTC | 28 Aug 24 18:37 UTC |
	| start   | -p                                                     | default-k8s-diff-port-940663 | jenkins | v1.33.1 | 28 Aug 24 18:37 UTC | 28 Aug 24 18:38 UTC |
	|         | default-k8s-diff-port-940663                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-807226        | old-k8s-version-807226       | jenkins | v1.33.1 | 28 Aug 24 18:37 UTC | 28 Aug 24 18:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-807226                              | old-k8s-version-807226       | jenkins | v1.33.1 | 28 Aug 24 18:37 UTC | 28 Aug 24 18:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-807226             | old-k8s-version-807226       | jenkins | v1.33.1 | 28 Aug 24 18:37 UTC | 28 Aug 24 18:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-807226                              | old-k8s-version-807226       | jenkins | v1.33.1 | 28 Aug 24 18:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-940663  | default-k8s-diff-port-940663 | jenkins | v1.33.1 | 28 Aug 24 18:38 UTC | 28 Aug 24 18:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-940663 | jenkins | v1.33.1 | 28 Aug 24 18:38 UTC | 28 Aug 24 18:38 UTC |
	|         | default-k8s-diff-port-940663                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-940663       | default-k8s-diff-port-940663 | jenkins | v1.33.1 | 28 Aug 24 18:38 UTC | 28 Aug 24 18:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-940663 | jenkins | v1.33.1 | 28 Aug 24 18:38 UTC | 28 Aug 24 18:43 UTC |
	|         | default-k8s-diff-port-940663                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-940663                           | default-k8s-diff-port-940663 | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC | 28 Aug 24 18:43 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-940663 | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC | 28 Aug 24 18:43 UTC |
	|         | default-k8s-diff-port-940663                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-940663 | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC | 28 Aug 24 18:43 UTC |
	|         | default-k8s-diff-port-940663                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-940663 | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC | 28 Aug 24 18:43 UTC |
	|         | default-k8s-diff-port-940663                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-940663 | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC | 28 Aug 24 18:43 UTC |
	|         | default-k8s-diff-port-940663                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-014747                                  | embed-certs-014747           | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 18:43:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 18:43:42.073875  517188 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:43:42.074090  517188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:43:42.074117  517188 out.go:358] Setting ErrFile to fd 2...
	I0828 18:43:42.074138  517188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:43:42.074420  517188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	I0828 18:43:42.074943  517188 out.go:352] Setting JSON to false
	I0828 18:43:42.076190  517188 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8771,"bootTime":1724861851,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0828 18:43:42.076307  517188 start.go:139] virtualization:  
	I0828 18:43:42.079090  517188 out.go:177] * [embed-certs-014747] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0828 18:43:42.081404  517188 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:43:42.081562  517188 notify.go:220] Checking for updates...
	I0828 18:43:42.086004  517188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:43:42.088461  517188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	I0828 18:43:42.098867  517188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	I0828 18:43:42.101122  517188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0828 18:43:42.103297  517188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:43:42.108850  517188 config.go:182] Loaded profile config "old-k8s-version-807226": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0828 18:43:42.108961  517188 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:43:42.153193  517188 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 18:43:42.153332  517188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 18:43:42.222379  517188 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-28 18:43:42.210427075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 18:43:42.222496  517188 docker.go:307] overlay module found
	I0828 18:43:42.224598  517188 out.go:177] * Using the docker driver based on user configuration
	I0828 18:43:42.226345  517188 start.go:297] selected driver: docker
	I0828 18:43:42.226370  517188 start.go:901] validating driver "docker" against <nil>
	I0828 18:43:42.226389  517188 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:43:42.227125  517188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 18:43:42.288503  517188 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-28 18:43:42.278085632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 18:43:42.288711  517188 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 18:43:42.288956  517188 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:43:42.290923  517188 out.go:177] * Using Docker driver with root privileges
	I0828 18:43:42.292759  517188 cni.go:84] Creating CNI manager for ""
	I0828 18:43:42.292791  517188 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0828 18:43:42.292805  517188 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0828 18:43:42.292908  517188 start.go:340] cluster config:
	{Name:embed-certs-014747 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:43:42.295636  517188 out.go:177] * Starting "embed-certs-014747" primary control-plane node in "embed-certs-014747" cluster
	I0828 18:43:42.297483  517188 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0828 18:43:42.299190  517188 out.go:177] * Pulling base image v0.0.44-1724775115-19521 ...
	I0828 18:43:42.300915  517188 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0828 18:43:42.300871  517188 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0828 18:43:42.301073  517188 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0828 18:43:42.301086  517188 cache.go:56] Caching tarball of preloaded images
	I0828 18:43:42.301186  517188 preload.go:172] Found /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 18:43:42.301200  517188 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0828 18:43:42.301357  517188 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/config.json ...
	I0828 18:43:42.301390  517188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/config.json: {Name:mk34c1671691578bd14c26eb751dc19b59e3c0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0828 18:43:42.321326  517188 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce is of wrong architecture
	I0828 18:43:42.321362  517188 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 18:43:42.321514  517188 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0828 18:43:42.321546  517188 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0828 18:43:42.321562  517188 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0828 18:43:42.321574  517188 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0828 18:43:42.321579  517188 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from local cache
	I0828 18:43:42.444981  517188 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from cached tarball
	I0828 18:43:42.445019  517188 cache.go:194] Successfully downloaded all kic artifacts
	I0828 18:43:42.445067  517188 start.go:360] acquireMachinesLock for embed-certs-014747: {Name:mk4ef7f940ea9cea1e93a44bd61876ceb69183a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:43:42.445657  517188 start.go:364] duration metric: took 564.642µs to acquireMachinesLock for "embed-certs-014747"
	I0828 18:43:42.445699  517188 start.go:93] Provisioning new machine with config: &{Name:embed-certs-014747 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0828 18:43:42.445787  517188 start.go:125] createHost starting for "" (driver="docker")
	I0828 18:43:42.612229  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:44.612259  506953 pod_ready.go:103] pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace has status "Ready":"False"
	I0828 18:43:45.131929  506953 pod_ready.go:82] duration metric: took 4m0.027576667s for pod "metrics-server-9975d5f86-6vl9g" in "kube-system" namespace to be "Ready" ...
	E0828 18:43:45.132022  506953 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:43:45.132083  506953 pod_ready.go:39] duration metric: took 5m28.808514978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:43:45.132131  506953 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:43:45.132195  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:43:45.132285  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:43:45.253912  506953 cri.go:89] found id: "ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75"
	I0828 18:43:45.254000  506953 cri.go:89] found id: "a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3"
	I0828 18:43:45.254023  506953 cri.go:89] found id: ""
	I0828 18:43:45.254082  506953 logs.go:276] 2 containers: [ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75 a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3]
	I0828 18:43:45.254191  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.265245  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.271205  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0828 18:43:45.271472  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:43:45.352884  506953 cri.go:89] found id: "2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127"
	I0828 18:43:45.352986  506953 cri.go:89] found id: "24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec"
	I0828 18:43:45.353008  506953 cri.go:89] found id: ""
	I0828 18:43:45.353029  506953 logs.go:276] 2 containers: [2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127 24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec]
	I0828 18:43:45.353161  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.358305  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.364298  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0828 18:43:45.364371  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:43:45.425046  506953 cri.go:89] found id: "1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233"
	I0828 18:43:45.425068  506953 cri.go:89] found id: "64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1"
	I0828 18:43:45.425074  506953 cri.go:89] found id: ""
	I0828 18:43:45.425081  506953 logs.go:276] 2 containers: [1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233 64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1]
	I0828 18:43:45.425142  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.430033  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.434279  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:43:45.434409  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:43:45.487807  506953 cri.go:89] found id: "d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4"
	I0828 18:43:45.487881  506953 cri.go:89] found id: "e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6"
	I0828 18:43:45.487903  506953 cri.go:89] found id: ""
	I0828 18:43:45.487932  506953 logs.go:276] 2 containers: [d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4 e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6]
	I0828 18:43:45.488019  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.492443  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.496066  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:43:45.496184  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:43:45.547025  506953 cri.go:89] found id: "39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7"
	I0828 18:43:45.547099  506953 cri.go:89] found id: "1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6"
	I0828 18:43:45.547118  506953 cri.go:89] found id: ""
	I0828 18:43:45.547143  506953 logs.go:276] 2 containers: [39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7 1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6]
	I0828 18:43:45.547235  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.553113  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.557602  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:43:45.557671  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:43:45.669527  506953 cri.go:89] found id: "ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69"
	I0828 18:43:45.669600  506953 cri.go:89] found id: "b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1"
	I0828 18:43:45.669620  506953 cri.go:89] found id: ""
	I0828 18:43:45.669642  506953 logs.go:276] 2 containers: [ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69 b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1]
	I0828 18:43:45.669730  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.673815  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.677575  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0828 18:43:45.677717  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:43:45.727588  506953 cri.go:89] found id: "f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b"
	I0828 18:43:45.727663  506953 cri.go:89] found id: "e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b"
	I0828 18:43:45.727683  506953 cri.go:89] found id: ""
	I0828 18:43:45.727710  506953 logs.go:276] 2 containers: [f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b]
	I0828 18:43:45.727798  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.731960  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.735682  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:43:45.735815  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:43:45.778170  506953 cri.go:89] found id: "3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5"
	I0828 18:43:45.778249  506953 cri.go:89] found id: ""
	I0828 18:43:45.778281  506953 logs.go:276] 1 containers: [3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5]
	I0828 18:43:45.778364  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.782201  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:43:45.782336  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:43:45.871370  506953 cri.go:89] found id: "16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9"
	I0828 18:43:45.871472  506953 cri.go:89] found id: "b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da"
	I0828 18:43:45.871492  506953 cri.go:89] found id: ""
	I0828 18:43:45.871519  506953 logs.go:276] 2 containers: [16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9 b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da]
	I0828 18:43:45.871612  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.880786  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:45.884382  506953 logs.go:123] Gathering logs for container status ...
	I0828 18:43:45.884456  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:43:45.947952  506953 logs.go:123] Gathering logs for etcd [24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec] ...
	I0828 18:43:45.948029  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec"
	I0828 18:43:46.019815  506953 logs.go:123] Gathering logs for kindnet [e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b] ...
	I0828 18:43:46.019931  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b"
	I0828 18:43:46.081805  506953 logs.go:123] Gathering logs for kube-scheduler [d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4] ...
	I0828 18:43:46.081891  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4"
	I0828 18:43:46.147525  506953 logs.go:123] Gathering logs for kube-scheduler [e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6] ...
	I0828 18:43:46.147601  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6"
	I0828 18:43:46.234768  506953 logs.go:123] Gathering logs for kube-controller-manager [b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1] ...
	I0828 18:43:46.234799  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1"
	I0828 18:43:46.314095  506953 logs.go:123] Gathering logs for kindnet [f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b] ...
	I0828 18:43:46.314134  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b"
	I0828 18:43:46.383950  506953 logs.go:123] Gathering logs for storage-provisioner [16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9] ...
	I0828 18:43:46.383989  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9"
	I0828 18:43:46.441889  506953 logs.go:123] Gathering logs for dmesg ...
	I0828 18:43:46.441926  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:43:46.461855  506953 logs.go:123] Gathering logs for coredns [64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1] ...
	I0828 18:43:46.461884  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1"
	I0828 18:43:46.503471  506953 logs.go:123] Gathering logs for etcd [2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127] ...
	I0828 18:43:46.503499  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127"
	I0828 18:43:46.561827  506953 logs.go:123] Gathering logs for kube-proxy [39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7] ...
	I0828 18:43:46.561858  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7"
	I0828 18:43:46.603506  506953 logs.go:123] Gathering logs for kube-proxy [1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6] ...
	I0828 18:43:46.603580  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6"
	I0828 18:43:42.448057  517188 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0828 18:43:42.448309  517188 start.go:159] libmachine.API.Create for "embed-certs-014747" (driver="docker")
	I0828 18:43:42.448339  517188 client.go:168] LocalClient.Create starting
	I0828 18:43:42.448460  517188 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem
	I0828 18:43:42.448508  517188 main.go:141] libmachine: Decoding PEM data...
	I0828 18:43:42.448537  517188 main.go:141] libmachine: Parsing certificate...
	I0828 18:43:42.448597  517188 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-294791/.minikube/certs/cert.pem
	I0828 18:43:42.448620  517188 main.go:141] libmachine: Decoding PEM data...
	I0828 18:43:42.448636  517188 main.go:141] libmachine: Parsing certificate...
	I0828 18:43:42.449024  517188 cli_runner.go:164] Run: docker network inspect embed-certs-014747 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0828 18:43:42.464566  517188 cli_runner.go:211] docker network inspect embed-certs-014747 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0828 18:43:42.464649  517188 network_create.go:284] running [docker network inspect embed-certs-014747] to gather additional debugging logs...
	I0828 18:43:42.464672  517188 cli_runner.go:164] Run: docker network inspect embed-certs-014747
	W0828 18:43:42.480199  517188 cli_runner.go:211] docker network inspect embed-certs-014747 returned with exit code 1
	I0828 18:43:42.480232  517188 network_create.go:287] error running [docker network inspect embed-certs-014747]: docker network inspect embed-certs-014747: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-014747 not found
	I0828 18:43:42.480246  517188 network_create.go:289] output of [docker network inspect embed-certs-014747]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-014747 not found
	
	** /stderr **
	I0828 18:43:42.480354  517188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0828 18:43:42.497113  517188 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-edb559462c84 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0a:50:c2:15} reservation:<nil>}
	I0828 18:43:42.497581  517188 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e35df5b4edfe IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f2:d5:56:ee} reservation:<nil>}
	I0828 18:43:42.497937  517188 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1c965bc59129 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:62:65:0d:64} reservation:<nil>}
	I0828 18:43:42.498479  517188 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018b0f90}
	I0828 18:43:42.498527  517188 network_create.go:124] attempt to create docker network embed-certs-014747 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0828 18:43:42.498611  517188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-014747 embed-certs-014747
	I0828 18:43:42.577888  517188 network_create.go:108] docker network embed-certs-014747 192.168.76.0/24 created
	I0828 18:43:42.577919  517188 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-014747" container
	I0828 18:43:42.577993  517188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0828 18:43:42.594026  517188 cli_runner.go:164] Run: docker volume create embed-certs-014747 --label name.minikube.sigs.k8s.io=embed-certs-014747 --label created_by.minikube.sigs.k8s.io=true
	I0828 18:43:42.614851  517188 oci.go:103] Successfully created a docker volume embed-certs-014747
	I0828 18:43:42.614948  517188 cli_runner.go:164] Run: docker run --rm --name embed-certs-014747-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-014747 --entrypoint /usr/bin/test -v embed-certs-014747:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib
	I0828 18:43:43.226624  517188 oci.go:107] Successfully prepared a docker volume embed-certs-014747
	I0828 18:43:43.226680  517188 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0828 18:43:43.226703  517188 kic.go:194] Starting extracting preloaded images to volume ...
	I0828 18:43:43.226798  517188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-014747:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir
	I0828 18:43:46.646422  506953 logs.go:123] Gathering logs for kubernetes-dashboard [3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5] ...
	I0828 18:43:46.646452  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5"
	I0828 18:43:46.694657  506953 logs.go:123] Gathering logs for storage-provisioner [b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da] ...
	I0828 18:43:46.694687  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da"
	I0828 18:43:46.734622  506953 logs.go:123] Gathering logs for containerd ...
	I0828 18:43:46.734659  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0828 18:43:46.796237  506953 logs.go:123] Gathering logs for kubelet ...
	I0828 18:43:46.796277  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 18:43:46.867037  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.295927     660 reflector.go:138] object-"kube-system"/"coredns-token-njr82": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-njr82" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.867264  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296027     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.867499  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296179     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-kglnx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kglnx" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.867714  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296232     660 reflector.go:138] object-"kube-system"/"kindnet-token-hjfcc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-hjfcc" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.867940  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296334     660 reflector.go:138] object-"kube-system"/"metrics-server-token-6hcmf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-6hcmf" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.868172  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296380     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-wcdgz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-wcdgz" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.868381  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296461     660 reflector.go:138] object-"default"/"default-token-j8qlp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-j8qlp" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.868591  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.304570     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:46.878017  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:20 old-k8s-version-807226 kubelet[660]: E0828 18:38:20.257115     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:46.878227  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:20 old-k8s-version-807226 kubelet[660]: E0828 18:38:20.830731     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.881091  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:31 old-k8s-version-807226 kubelet[660]: E0828 18:38:31.712112     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:46.882779  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:42 old-k8s-version-807226 kubelet[660]: E0828 18:38:42.703684     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.883440  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:43 old-k8s-version-807226 kubelet[660]: E0828 18:38:43.913087     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.883774  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:44 old-k8s-version-807226 kubelet[660]: E0828 18:38:44.913852     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.884556  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:48 old-k8s-version-807226 kubelet[660]: E0828 18:38:48.925862     660 pod_workers.go:191] Error syncing pod 24508be5-83e6-4672-82ce-b943d2db673c ("storage-provisioner_kube-system(24508be5-83e6-4672-82ce-b943d2db673c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(24508be5-83e6-4672-82ce-b943d2db673c)"
	W0828 18:43:46.884884  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:49 old-k8s-version-807226 kubelet[660]: E0828 18:38:49.527740     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.887383  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:55 old-k8s-version-807226 kubelet[660]: E0828 18:38:55.711644     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:46.888497  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:03 old-k8s-version-807226 kubelet[660]: E0828 18:39:03.033822     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.888690  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:08 old-k8s-version-807226 kubelet[660]: E0828 18:39:08.701849     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.889023  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:09 old-k8s-version-807226 kubelet[660]: E0828 18:39:09.528085     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.889652  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:23 old-k8s-version-807226 kubelet[660]: E0828 18:39:23.137950     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.889841  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:23 old-k8s-version-807226 kubelet[660]: E0828 18:39:23.701943     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.890169  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:29 old-k8s-version-807226 kubelet[660]: E0828 18:39:29.527775     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.892795  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:36 old-k8s-version-807226 kubelet[660]: E0828 18:39:36.713928     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:46.893136  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:42 old-k8s-version-807226 kubelet[660]: E0828 18:39:42.701847     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.893328  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:47 old-k8s-version-807226 kubelet[660]: E0828 18:39:47.701990     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.893661  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:56 old-k8s-version-807226 kubelet[660]: E0828 18:39:56.701813     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.893847  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:59 old-k8s-version-807226 kubelet[660]: E0828 18:39:59.703610     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.894439  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:09 old-k8s-version-807226 kubelet[660]: E0828 18:40:09.269052     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.894771  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:10 old-k8s-version-807226 kubelet[660]: E0828 18:40:10.274200     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.894956  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:13 old-k8s-version-807226 kubelet[660]: E0828 18:40:13.701637     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.895289  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:22 old-k8s-version-807226 kubelet[660]: E0828 18:40:22.700967     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.895483  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:28 old-k8s-version-807226 kubelet[660]: E0828 18:40:28.701621     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.895811  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:35 old-k8s-version-807226 kubelet[660]: E0828 18:40:35.701534     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.895996  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:41 old-k8s-version-807226 kubelet[660]: E0828 18:40:41.701505     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.896328  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:46 old-k8s-version-807226 kubelet[660]: E0828 18:40:46.700951     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.896520  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:55 old-k8s-version-807226 kubelet[660]: E0828 18:40:55.701917     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.896871  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:01 old-k8s-version-807226 kubelet[660]: E0828 18:41:01.700919     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.899343  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:08 old-k8s-version-807226 kubelet[660]: E0828 18:41:08.714113     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:46.899753  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:16 old-k8s-version-807226 kubelet[660]: E0828 18:41:16.701046     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.899944  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:23 old-k8s-version-807226 kubelet[660]: E0828 18:41:23.701588     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.900537  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:32 old-k8s-version-807226 kubelet[660]: E0828 18:41:32.512783     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.900743  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:38 old-k8s-version-807226 kubelet[660]: E0828 18:41:38.701192     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.901089  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:39 old-k8s-version-807226 kubelet[660]: E0828 18:41:39.527936     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.901275  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:49 old-k8s-version-807226 kubelet[660]: E0828 18:41:49.703090     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.901609  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:50 old-k8s-version-807226 kubelet[660]: E0828 18:41:50.700970     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.901795  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:00 old-k8s-version-807226 kubelet[660]: E0828 18:42:00.701342     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.902122  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:01 old-k8s-version-807226 kubelet[660]: E0828 18:42:01.700946     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.902307  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:11 old-k8s-version-807226 kubelet[660]: E0828 18:42:11.701847     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.902635  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:13 old-k8s-version-807226 kubelet[660]: E0828 18:42:13.701572     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.902845  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:23 old-k8s-version-807226 kubelet[660]: E0828 18:42:23.704011     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.905145  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:25 old-k8s-version-807226 kubelet[660]: E0828 18:42:25.701382     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.905349  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:34 old-k8s-version-807226 kubelet[660]: E0828 18:42:34.701350     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.905683  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:38 old-k8s-version-807226 kubelet[660]: E0828 18:42:38.701282     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.905871  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:49 old-k8s-version-807226 kubelet[660]: E0828 18:42:49.701288     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.906205  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:49 old-k8s-version-807226 kubelet[660]: E0828 18:42:49.702400     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.906536  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:02 old-k8s-version-807226 kubelet[660]: E0828 18:43:02.701048     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.906722  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:04 old-k8s-version-807226 kubelet[660]: E0828 18:43:04.701421     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.907061  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:13 old-k8s-version-807226 kubelet[660]: E0828 18:43:13.704867     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.907248  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:15 old-k8s-version-807226 kubelet[660]: E0828 18:43:15.701573     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.907593  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:24 old-k8s-version-807226 kubelet[660]: E0828 18:43:24.700926     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.907780  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:27 old-k8s-version-807226 kubelet[660]: E0828 18:43:27.701547     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:46.908111  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.701771     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:46.908298  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.702673     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0828 18:43:46.908309  506953 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:43:46.908324  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:43:47.253484  506953 logs.go:123] Gathering logs for coredns [1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233] ...
	I0828 18:43:47.253521  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233"
	I0828 18:43:47.290727  506953 logs.go:123] Gathering logs for kube-controller-manager [ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69] ...
	I0828 18:43:47.290767  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69"
	I0828 18:43:47.365736  506953 logs.go:123] Gathering logs for kube-apiserver [ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75] ...
	I0828 18:43:47.365773  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75"
	I0828 18:43:47.448039  506953 logs.go:123] Gathering logs for kube-apiserver [a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3] ...
	I0828 18:43:47.448074  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3"
	I0828 18:43:47.518932  506953 out.go:358] Setting ErrFile to fd 2...
	I0828 18:43:47.518964  506953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 18:43:47.519016  506953 out.go:270] X Problems detected in kubelet:
	W0828 18:43:47.519030  506953 out.go:270]   Aug 28 18:43:15 old-k8s-version-807226 kubelet[660]: E0828 18:43:15.701573     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:47.519037  506953 out.go:270]   Aug 28 18:43:24 old-k8s-version-807226 kubelet[660]: E0828 18:43:24.700926     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:47.519048  506953 out.go:270]   Aug 28 18:43:27 old-k8s-version-807226 kubelet[660]: E0828 18:43:27.701547     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:47.519054  506953 out.go:270]   Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.701771     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:47.519065  506953 out.go:270]   Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.702673     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0828 18:43:47.519088  506953 out.go:358] Setting ErrFile to fd 2...
	I0828 18:43:47.519093  506953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:43:47.633411  517188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-014747:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir: (4.406573027s)
	I0828 18:43:47.633452  517188 kic.go:203] duration metric: took 4.406744989s to extract preloaded images to volume ...
	W0828 18:43:47.633588  517188 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0828 18:43:47.633718  517188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0828 18:43:47.685892  517188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-014747 --name embed-certs-014747 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-014747 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-014747 --network embed-certs-014747 --ip 192.168.76.2 --volume embed-certs-014747:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce
	I0828 18:43:48.164838  517188 cli_runner.go:164] Run: docker container inspect embed-certs-014747 --format={{.State.Running}}
	I0828 18:43:48.198061  517188 cli_runner.go:164] Run: docker container inspect embed-certs-014747 --format={{.State.Status}}
	I0828 18:43:48.223238  517188 cli_runner.go:164] Run: docker exec embed-certs-014747 stat /var/lib/dpkg/alternatives/iptables
	I0828 18:43:48.303695  517188 oci.go:144] the created container "embed-certs-014747" has a running status.
	I0828 18:43:48.303723  517188 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19529-294791/.minikube/machines/embed-certs-014747/id_rsa...
	I0828 18:43:48.628302  517188 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19529-294791/.minikube/machines/embed-certs-014747/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0828 18:43:48.659210  517188 cli_runner.go:164] Run: docker container inspect embed-certs-014747 --format={{.State.Status}}
	I0828 18:43:48.684380  517188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0828 18:43:48.684400  517188 kic_runner.go:114] Args: [docker exec --privileged embed-certs-014747 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0828 18:43:48.762074  517188 cli_runner.go:164] Run: docker container inspect embed-certs-014747 --format={{.State.Status}}
	I0828 18:43:48.791624  517188 machine.go:93] provisionDockerMachine start ...
	I0828 18:43:48.791733  517188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-014747
	I0828 18:43:48.819566  517188 main.go:141] libmachine: Using SSH client type: native
	I0828 18:43:48.819843  517188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0828 18:43:48.819853  517188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:43:48.820458  517188 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0828 18:43:51.958858  517188 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-014747
	
	I0828 18:43:51.958884  517188 ubuntu.go:169] provisioning hostname "embed-certs-014747"
	I0828 18:43:51.958952  517188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-014747
	I0828 18:43:51.976720  517188 main.go:141] libmachine: Using SSH client type: native
	I0828 18:43:51.976977  517188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0828 18:43:51.976994  517188 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-014747 && echo "embed-certs-014747" | sudo tee /etc/hostname
	I0828 18:43:52.132807  517188 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-014747
	
	I0828 18:43:52.132892  517188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-014747
	I0828 18:43:52.151915  517188 main.go:141] libmachine: Using SSH client type: native
	I0828 18:43:52.152176  517188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0828 18:43:52.152200  517188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-014747' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-014747/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-014747' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:43:52.291424  517188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:43:52.291456  517188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19529-294791/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-294791/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-294791/.minikube}
	I0828 18:43:52.291481  517188 ubuntu.go:177] setting up certificates
	I0828 18:43:52.291490  517188 provision.go:84] configureAuth start
	I0828 18:43:52.291549  517188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-014747
	I0828 18:43:52.308276  517188 provision.go:143] copyHostCerts
	I0828 18:43:52.308349  517188 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-294791/.minikube/key.pem, removing ...
	I0828 18:43:52.308365  517188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-294791/.minikube/key.pem
	I0828 18:43:52.308444  517188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-294791/.minikube/key.pem (1679 bytes)
	I0828 18:43:52.308550  517188 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-294791/.minikube/ca.pem, removing ...
	I0828 18:43:52.308618  517188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-294791/.minikube/ca.pem
	I0828 18:43:52.308673  517188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-294791/.minikube/ca.pem (1082 bytes)
	I0828 18:43:52.308760  517188 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-294791/.minikube/cert.pem, removing ...
	I0828 18:43:52.308773  517188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-294791/.minikube/cert.pem
	I0828 18:43:52.308804  517188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-294791/.minikube/cert.pem (1123 bytes)
	I0828 18:43:52.308867  517188 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-294791/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca-key.pem org=jenkins.embed-certs-014747 san=[127.0.0.1 192.168.76.2 embed-certs-014747 localhost minikube]
	I0828 18:43:53.583615  517188 provision.go:177] copyRemoteCerts
	I0828 18:43:53.583686  517188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:43:53.583727  517188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-014747
	I0828 18:43:53.600353  517188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/embed-certs-014747/id_rsa Username:docker}
	I0828 18:43:53.696256  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0828 18:43:53.728231  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0828 18:43:53.752065  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:43:53.778717  517188 provision.go:87] duration metric: took 1.487211837s to configureAuth
	I0828 18:43:53.778786  517188 ubuntu.go:193] setting minikube options for container-runtime
	I0828 18:43:53.778998  517188 config.go:182] Loaded profile config "embed-certs-014747": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0828 18:43:53.779015  517188 machine.go:96] duration metric: took 4.987365893s to provisionDockerMachine
	I0828 18:43:53.779023  517188 client.go:171] duration metric: took 11.330677389s to LocalClient.Create
	I0828 18:43:53.779050  517188 start.go:167] duration metric: took 11.330741454s to libmachine.API.Create "embed-certs-014747"
	I0828 18:43:53.779064  517188 start.go:293] postStartSetup for "embed-certs-014747" (driver="docker")
	I0828 18:43:53.779074  517188 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:43:53.779139  517188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:43:53.779191  517188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-014747
	I0828 18:43:53.797196  517188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/embed-certs-014747/id_rsa Username:docker}
	I0828 18:43:53.896548  517188 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:43:53.899929  517188 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0828 18:43:53.900011  517188 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0828 18:43:53.900048  517188 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0828 18:43:53.900062  517188 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0828 18:43:53.900074  517188 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-294791/.minikube/addons for local assets ...
	I0828 18:43:53.900133  517188 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-294791/.minikube/files for local assets ...
	I0828 18:43:53.900223  517188 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-294791/.minikube/files/etc/ssl/certs/3001822.pem -> 3001822.pem in /etc/ssl/certs
	I0828 18:43:53.900335  517188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:43:53.909248  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/files/etc/ssl/certs/3001822.pem --> /etc/ssl/certs/3001822.pem (1708 bytes)
	I0828 18:43:53.934637  517188 start.go:296] duration metric: took 155.559322ms for postStartSetup
	I0828 18:43:53.935009  517188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-014747
	I0828 18:43:53.952069  517188 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/config.json ...
	I0828 18:43:53.952358  517188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 18:43:53.952410  517188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-014747
	I0828 18:43:53.969185  517188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/embed-certs-014747/id_rsa Username:docker}
	I0828 18:43:54.064542  517188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0828 18:43:54.069795  517188 start.go:128] duration metric: took 11.623989935s to createHost
	I0828 18:43:54.069820  517188 start.go:83] releasing machines lock for "embed-certs-014747", held for 11.624141943s
	I0828 18:43:54.069897  517188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-014747
	I0828 18:43:54.087568  517188 ssh_runner.go:195] Run: cat /version.json
	I0828 18:43:54.087622  517188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-014747
	I0828 18:43:54.087971  517188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:43:54.088043  517188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-014747
	I0828 18:43:54.118516  517188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/embed-certs-014747/id_rsa Username:docker}
	I0828 18:43:54.127419  517188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/embed-certs-014747/id_rsa Username:docker}
	I0828 18:43:54.364309  517188 ssh_runner.go:195] Run: systemctl --version
	I0828 18:43:54.369286  517188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0828 18:43:54.373350  517188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0828 18:43:54.399271  517188 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0828 18:43:54.399450  517188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:43:54.429662  517188 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0828 18:43:54.429728  517188 start.go:495] detecting cgroup driver to use...
	I0828 18:43:54.429776  517188 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0828 18:43:54.429862  517188 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0828 18:43:54.442289  517188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 18:43:54.454220  517188 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:43:54.454290  517188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:43:54.468340  517188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:43:54.483522  517188 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:43:54.565417  517188 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:43:54.662775  517188 docker.go:233] disabling docker service ...
	I0828 18:43:54.662896  517188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:43:54.686497  517188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:43:54.699036  517188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:43:54.792970  517188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:43:54.888795  517188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:43:54.900565  517188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:43:54.918440  517188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0828 18:43:54.928784  517188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0828 18:43:54.939180  517188 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0828 18:43:54.939297  517188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0828 18:43:54.951873  517188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 18:43:54.961847  517188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0828 18:43:54.971505  517188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 18:43:54.981967  517188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:43:54.991651  517188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0828 18:43:55.001645  517188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0828 18:43:55.030229  517188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0828 18:43:55.042391  517188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:43:55.053923  517188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:43:55.067282  517188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:43:55.153544  517188 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0828 18:43:55.293525  517188 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0828 18:43:55.293598  517188 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0828 18:43:55.297331  517188 start.go:563] Will wait 60s for crictl version
	I0828 18:43:55.297396  517188 ssh_runner.go:195] Run: which crictl
	I0828 18:43:55.300834  517188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:43:55.345426  517188 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0828 18:43:55.345497  517188 ssh_runner.go:195] Run: containerd --version
	I0828 18:43:55.368679  517188 ssh_runner.go:195] Run: containerd --version
	I0828 18:43:55.398284  517188 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0828 18:43:55.400401  517188 cli_runner.go:164] Run: docker network inspect embed-certs-014747 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0828 18:43:55.416167  517188 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0828 18:43:55.419701  517188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:43:55.431075  517188 kubeadm.go:883] updating cluster {Name:embed-certs-014747 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:43:55.431199  517188 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0828 18:43:55.431264  517188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:43:55.467237  517188 containerd.go:627] all images are preloaded for containerd runtime.
	I0828 18:43:55.467262  517188 containerd.go:534] Images already preloaded, skipping extraction
	I0828 18:43:55.467320  517188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:43:55.503121  517188 containerd.go:627] all images are preloaded for containerd runtime.
	I0828 18:43:55.503146  517188 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:43:55.503156  517188 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.0 containerd true true} ...
	I0828 18:43:55.503255  517188 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-014747 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
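	The kubelet flags above end up in the systemd drop-in that the scp steps below write to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes). As a rough way to inspect the result on a live profile (a sketch only, assuming the profile name used in this run), one could run:
	# show the generated kubelet drop-in and the effective unit (sketch; not part of the test run)
	minikube ssh -p embed-certs-014747 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	minikube ssh -p embed-certs-014747 "sudo systemctl cat kubelet.service"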
	I0828 18:43:55.503324  517188 ssh_runner.go:195] Run: sudo crictl info
	I0828 18:43:55.541218  517188 cni.go:84] Creating CNI manager for ""
	I0828 18:43:55.541246  517188 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0828 18:43:55.541259  517188 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:43:55.541281  517188 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-014747 NodeName:embed-certs-014747 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:43:55.541448  517188 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-014747"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
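
	The kubeadm config printed above is what the scp step below writes to /var/tmp/minikube/kubeadm.yaml.new and later copies to /var/tmp/minikube/kubeadm.yaml. A minimal sketch of replaying this step by hand, assuming shell access to the node (e.g. via minikube ssh) and the binaries minikube places under /var/lib/minikube:
	# from inside the node: dry-run kubeadm against the config minikube generated (sketch only)
	sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run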
	
	I0828 18:43:55.541514  517188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:43:55.550435  517188 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:43:55.550528  517188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:43:55.559360  517188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0828 18:43:55.577975  517188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:43:55.596656  517188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0828 18:43:55.614091  517188 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0828 18:43:55.617379  517188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:43:55.628949  517188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:43:55.723344  517188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:43:55.739889  517188 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747 for IP: 192.168.76.2
	I0828 18:43:55.739922  517188 certs.go:194] generating shared ca certs ...
	I0828 18:43:55.739940  517188 certs.go:226] acquiring lock for ca certs: {Name:mke663c906ba93beaf12a5613882d3e46b93d46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:55.740084  517188 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-294791/.minikube/ca.key
	I0828 18:43:55.740135  517188 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.key
	I0828 18:43:55.740147  517188 certs.go:256] generating profile certs ...
	I0828 18:43:55.740211  517188 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/client.key
	I0828 18:43:55.740228  517188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/client.crt with IP's: []
	I0828 18:43:56.319992  517188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/client.crt ...
	I0828 18:43:56.320023  517188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/client.crt: {Name:mka7b8aea19c2e4cc0cf36add144c06fc14d1d51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:56.320259  517188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/client.key ...
	I0828 18:43:56.320273  517188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/client.key: {Name:mkb844099118d42ee7250360687217f68edf5cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:56.320760  517188 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.key.0b9d5a5c
	I0828 18:43:56.320781  517188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.crt.0b9d5a5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0828 18:43:56.842998  517188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.crt.0b9d5a5c ...
	I0828 18:43:56.843029  517188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.crt.0b9d5a5c: {Name:mk82ebe7931112ed47a0e1b22bd0827e744ba797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:56.843919  517188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.key.0b9d5a5c ...
	I0828 18:43:56.843938  517188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.key.0b9d5a5c: {Name:mkd84a321e131eee162656fd5ef28e3c56925899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:56.844401  517188 certs.go:381] copying /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.crt.0b9d5a5c -> /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.crt
	I0828 18:43:56.844503  517188 certs.go:385] copying /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.key.0b9d5a5c -> /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.key
	I0828 18:43:56.844569  517188 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/proxy-client.key
	I0828 18:43:56.844602  517188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/proxy-client.crt with IP's: []
	I0828 18:43:57.119245  517188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/proxy-client.crt ...
	I0828 18:43:57.119276  517188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/proxy-client.crt: {Name:mk1e6f6c07ff582327125b5211d18aa475ac604d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:57.120250  517188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/proxy-client.key ...
	I0828 18:43:57.120275  517188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/proxy-client.key: {Name:mka27852679c82d28e0c34db76085f35addf50a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:57.121269  517188 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/300182.pem (1338 bytes)
	W0828 18:43:57.121321  517188 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-294791/.minikube/certs/300182_empty.pem, impossibly tiny 0 bytes
	I0828 18:43:57.121337  517188 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:43:57.121366  517188 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/ca.pem (1082 bytes)
	I0828 18:43:57.121394  517188 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:43:57.121422  517188 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/certs/key.pem (1679 bytes)
	I0828 18:43:57.121474  517188 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-294791/.minikube/files/etc/ssl/certs/3001822.pem (1708 bytes)
	I0828 18:43:57.122130  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:43:57.150102  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0828 18:43:57.180588  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:43:57.210152  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:43:57.238802  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0828 18:43:57.267257  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 18:43:57.293321  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:43:57.318043  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/embed-certs-014747/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 18:43:57.345112  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:43:57.369117  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/certs/300182.pem --> /usr/share/ca-certificates/300182.pem (1338 bytes)
	I0828 18:43:57.394943  517188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-294791/.minikube/files/etc/ssl/certs/3001822.pem --> /usr/share/ca-certificates/3001822.pem (1708 bytes)
	I0828 18:43:57.420380  517188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:43:57.439870  517188 ssh_runner.go:195] Run: openssl version
	I0828 18:43:57.445433  517188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:43:57.454674  517188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:43:57.458306  517188 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:43:57.458393  517188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:43:57.465168  517188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:43:57.474451  517188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300182.pem && ln -fs /usr/share/ca-certificates/300182.pem /etc/ssl/certs/300182.pem"
	I0828 18:43:57.484825  517188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300182.pem
	I0828 18:43:57.488393  517188 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:59 /usr/share/ca-certificates/300182.pem
	I0828 18:43:57.488513  517188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300182.pem
	I0828 18:43:57.495493  517188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300182.pem /etc/ssl/certs/51391683.0"
	I0828 18:43:57.504771  517188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3001822.pem && ln -fs /usr/share/ca-certificates/3001822.pem /etc/ssl/certs/3001822.pem"
	I0828 18:43:57.514145  517188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3001822.pem
	I0828 18:43:57.517849  517188 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:59 /usr/share/ca-certificates/3001822.pem
	I0828 18:43:57.517932  517188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3001822.pem
	I0828 18:43:57.526327  517188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3001822.pem /etc/ssl/certs/3ec20f2e.0"
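	The test/ln pairs above implement OpenSSL's hashed-symlink layout: each CA file under /usr/share/ca-certificates is exposed as /etc/ssl/certs/<subject-hash>.0 so TLS clients on the node trust it (b5213941.0 is the hash for minikubeCA.pem in this run). A minimal sketch of the same scheme, using the file path from this run:
	# compute the subject hash and link the CA under /etc/ssl/certs/<hash>.0 (sketch only)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"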
	I0828 18:43:57.540734  517188 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:43:57.545934  517188 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 18:43:57.546013  517188 kubeadm.go:392] StartCluster: {Name:embed-certs-014747 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:43:57.546115  517188 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0828 18:43:57.546200  517188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:43:57.600986  517188 cri.go:89] found id: ""
	I0828 18:43:57.601078  517188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:43:57.612739  517188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:43:57.624135  517188 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0828 18:43:57.624236  517188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:43:57.636182  517188 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:43:57.636248  517188 kubeadm.go:157] found existing configuration files:
	
	I0828 18:43:57.636312  517188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:43:57.645841  517188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:43:57.645966  517188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:43:57.655419  517188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:43:57.668685  517188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:43:57.668780  517188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:43:57.680846  517188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:43:57.692820  517188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:43:57.692927  517188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:43:57.706351  517188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:43:57.722250  517188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:43:57.722340  517188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:43:57.734203  517188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0828 18:43:57.817563  517188 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 18:43:57.817910  517188 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:43:57.844898  517188 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0828 18:43:57.845056  517188 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0828 18:43:57.845128  517188 kubeadm.go:310] OS: Linux
	I0828 18:43:57.845208  517188 kubeadm.go:310] CGROUPS_CPU: enabled
	I0828 18:43:57.845262  517188 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0828 18:43:57.845312  517188 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0828 18:43:57.845363  517188 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0828 18:43:57.845413  517188 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0828 18:43:57.845465  517188 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0828 18:43:57.845512  517188 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0828 18:43:57.845563  517188 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0828 18:43:57.845612  517188 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0828 18:43:57.954704  517188 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:43:57.954871  517188 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:43:57.954979  517188 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 18:43:57.959820  517188 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:43:57.519796  506953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:57.536692  506953 api_server.go:72] duration metric: took 5m57.826170676s to wait for apiserver process to appear ...
	I0828 18:43:57.536720  506953 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:43:57.536758  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:43:57.536817  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:43:57.595661  506953 cri.go:89] found id: "ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75"
	I0828 18:43:57.595682  506953 cri.go:89] found id: "a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3"
	I0828 18:43:57.595687  506953 cri.go:89] found id: ""
	I0828 18:43:57.595694  506953 logs.go:276] 2 containers: [ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75 a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3]
	I0828 18:43:57.595754  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.599641  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.605878  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0828 18:43:57.605943  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:43:57.663987  506953 cri.go:89] found id: "2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127"
	I0828 18:43:57.664005  506953 cri.go:89] found id: "24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec"
	I0828 18:43:57.664010  506953 cri.go:89] found id: ""
	I0828 18:43:57.664017  506953 logs.go:276] 2 containers: [2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127 24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec]
	I0828 18:43:57.664076  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.668859  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.672658  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0828 18:43:57.672723  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:43:57.748334  506953 cri.go:89] found id: "1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233"
	I0828 18:43:57.748354  506953 cri.go:89] found id: "64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1"
	I0828 18:43:57.748358  506953 cri.go:89] found id: ""
	I0828 18:43:57.748365  506953 logs.go:276] 2 containers: [1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233 64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1]
	I0828 18:43:57.748419  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.752475  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.756288  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:43:57.756354  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:43:57.805243  506953 cri.go:89] found id: "d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4"
	I0828 18:43:57.805262  506953 cri.go:89] found id: "e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6"
	I0828 18:43:57.805267  506953 cri.go:89] found id: ""
	I0828 18:43:57.805274  506953 logs.go:276] 2 containers: [d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4 e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6]
	I0828 18:43:57.805328  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.809042  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.813760  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:43:57.813826  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:43:57.878594  506953 cri.go:89] found id: "39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7"
	I0828 18:43:57.878669  506953 cri.go:89] found id: "1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6"
	I0828 18:43:57.878689  506953 cri.go:89] found id: ""
	I0828 18:43:57.878716  506953 logs.go:276] 2 containers: [39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7 1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6]
	I0828 18:43:57.878795  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.883014  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.890506  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:43:57.890624  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:43:57.972627  506953 cri.go:89] found id: "ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69"
	I0828 18:43:57.972698  506953 cri.go:89] found id: "b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1"
	I0828 18:43:57.972717  506953 cri.go:89] found id: ""
	I0828 18:43:57.972742  506953 logs.go:276] 2 containers: [ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69 b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1]
	I0828 18:43:57.972820  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.978344  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:57.982006  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0828 18:43:57.982108  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:43:58.032507  506953 cri.go:89] found id: "f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b"
	I0828 18:43:58.032586  506953 cri.go:89] found id: "e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b"
	I0828 18:43:58.032608  506953 cri.go:89] found id: ""
	I0828 18:43:58.032636  506953 logs.go:276] 2 containers: [f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b]
	I0828 18:43:58.032717  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:58.037284  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:58.041323  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:43:58.041439  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:43:58.100965  506953 cri.go:89] found id: "16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9"
	I0828 18:43:58.101034  506953 cri.go:89] found id: "b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da"
	I0828 18:43:58.101059  506953 cri.go:89] found id: ""
	I0828 18:43:58.101086  506953 logs.go:276] 2 containers: [16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9 b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da]
	I0828 18:43:58.101161  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:58.104945  506953 ssh_runner.go:195] Run: which crictl
	I0828 18:43:58.108551  506953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:43:58.108643  506953 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:43:58.202118  506953 cri.go:89] found id: "3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5"
	I0828 18:43:58.202187  506953 cri.go:89] found id: ""
	I0828 18:43:58.202209  506953 logs.go:276] 1 containers: [3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5]
	I0828 18:43:58.202282  506953 ssh_runner.go:195] Run: which crictl
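	The crictl ps calls above only collect container IDs per control-plane component; the log-gathering steps that follow then tail each ID. A sketch of the same discover-then-tail pattern for one component, run inside the node (e.g. via minikube ssh), using the commands shown in this log:
	# list all kube-apiserver containers (running or exited) and tail their logs (sketch only)
	for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	  sudo crictl logs --tail 400 "$id"
	done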
	I0828 18:43:58.206252  506953 logs.go:123] Gathering logs for storage-provisioner [16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9] ...
	I0828 18:43:58.206311  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9"
	I0828 18:43:58.270553  506953 logs.go:123] Gathering logs for storage-provisioner [b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da] ...
	I0828 18:43:58.270627  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da"
	I0828 18:43:58.323427  506953 logs.go:123] Gathering logs for kubelet ...
	I0828 18:43:58.323499  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 18:43:58.379618  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.295927     660 reflector.go:138] object-"kube-system"/"coredns-token-njr82": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-njr82" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.379877  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296027     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.380114  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296179     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-kglnx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kglnx" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.380389  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296232     660 reflector.go:138] object-"kube-system"/"kindnet-token-hjfcc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-hjfcc" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.380643  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296334     660 reflector.go:138] object-"kube-system"/"metrics-server-token-6hcmf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-6hcmf" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.380890  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296380     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-wcdgz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-wcdgz" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.381117  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.296461     660 reflector.go:138] object-"default"/"default-token-j8qlp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-j8qlp" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.381343  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:16 old-k8s-version-807226 kubelet[660]: E0828 18:38:16.304570     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-807226" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-807226' and this object
	W0828 18:43:58.390666  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:20 old-k8s-version-807226 kubelet[660]: E0828 18:38:20.257115     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:58.390891  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:20 old-k8s-version-807226 kubelet[660]: E0828 18:38:20.830731     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.393710  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:31 old-k8s-version-807226 kubelet[660]: E0828 18:38:31.712112     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:58.395431  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:42 old-k8s-version-807226 kubelet[660]: E0828 18:38:42.703684     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.396041  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:43 old-k8s-version-807226 kubelet[660]: E0828 18:38:43.913087     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.396398  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:44 old-k8s-version-807226 kubelet[660]: E0828 18:38:44.913852     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.397187  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:48 old-k8s-version-807226 kubelet[660]: E0828 18:38:48.925862     660 pod_workers.go:191] Error syncing pod 24508be5-83e6-4672-82ce-b943d2db673c ("storage-provisioner_kube-system(24508be5-83e6-4672-82ce-b943d2db673c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(24508be5-83e6-4672-82ce-b943d2db673c)"
	W0828 18:43:58.397532  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:49 old-k8s-version-807226 kubelet[660]: E0828 18:38:49.527740     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.400075  506953 logs.go:138] Found kubelet problem: Aug 28 18:38:55 old-k8s-version-807226 kubelet[660]: E0828 18:38:55.711644     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:58.401166  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:03 old-k8s-version-807226 kubelet[660]: E0828 18:39:03.033822     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.401370  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:08 old-k8s-version-807226 kubelet[660]: E0828 18:39:08.701849     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.401718  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:09 old-k8s-version-807226 kubelet[660]: E0828 18:39:09.528085     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.402331  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:23 old-k8s-version-807226 kubelet[660]: E0828 18:39:23.137950     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.402533  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:23 old-k8s-version-807226 kubelet[660]: E0828 18:39:23.701943     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.402874  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:29 old-k8s-version-807226 kubelet[660]: E0828 18:39:29.527775     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.405316  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:36 old-k8s-version-807226 kubelet[660]: E0828 18:39:36.713928     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:58.405660  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:42 old-k8s-version-807226 kubelet[660]: E0828 18:39:42.701847     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.405864  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:47 old-k8s-version-807226 kubelet[660]: E0828 18:39:47.701990     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.406221  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:56 old-k8s-version-807226 kubelet[660]: E0828 18:39:56.701813     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.406424  506953 logs.go:138] Found kubelet problem: Aug 28 18:39:59 old-k8s-version-807226 kubelet[660]: E0828 18:39:59.703610     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.407057  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:09 old-k8s-version-807226 kubelet[660]: E0828 18:40:09.269052     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.407409  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:10 old-k8s-version-807226 kubelet[660]: E0828 18:40:10.274200     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.407613  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:13 old-k8s-version-807226 kubelet[660]: E0828 18:40:13.701637     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.407960  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:22 old-k8s-version-807226 kubelet[660]: E0828 18:40:22.700967     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.408160  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:28 old-k8s-version-807226 kubelet[660]: E0828 18:40:28.701621     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.408507  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:35 old-k8s-version-807226 kubelet[660]: E0828 18:40:35.701534     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.408709  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:41 old-k8s-version-807226 kubelet[660]: E0828 18:40:41.701505     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.411739  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:46 old-k8s-version-807226 kubelet[660]: E0828 18:40:46.700951     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.411947  506953 logs.go:138] Found kubelet problem: Aug 28 18:40:55 old-k8s-version-807226 kubelet[660]: E0828 18:40:55.701917     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.412295  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:01 old-k8s-version-807226 kubelet[660]: E0828 18:41:01.700919     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.414735  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:08 old-k8s-version-807226 kubelet[660]: E0828 18:41:08.714113     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0828 18:43:58.415082  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:16 old-k8s-version-807226 kubelet[660]: E0828 18:41:16.701046     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.415285  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:23 old-k8s-version-807226 kubelet[660]: E0828 18:41:23.701588     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.415901  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:32 old-k8s-version-807226 kubelet[660]: E0828 18:41:32.512783     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.416107  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:38 old-k8s-version-807226 kubelet[660]: E0828 18:41:38.701192     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.416452  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:39 old-k8s-version-807226 kubelet[660]: E0828 18:41:39.527936     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.416664  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:49 old-k8s-version-807226 kubelet[660]: E0828 18:41:49.703090     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.417006  506953 logs.go:138] Found kubelet problem: Aug 28 18:41:50 old-k8s-version-807226 kubelet[660]: E0828 18:41:50.700970     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.417210  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:00 old-k8s-version-807226 kubelet[660]: E0828 18:42:00.701342     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.417552  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:01 old-k8s-version-807226 kubelet[660]: E0828 18:42:01.700946     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.417752  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:11 old-k8s-version-807226 kubelet[660]: E0828 18:42:11.701847     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.418094  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:13 old-k8s-version-807226 kubelet[660]: E0828 18:42:13.701572     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.418297  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:23 old-k8s-version-807226 kubelet[660]: E0828 18:42:23.704011     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.418644  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:25 old-k8s-version-807226 kubelet[660]: E0828 18:42:25.701382     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.418865  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:34 old-k8s-version-807226 kubelet[660]: E0828 18:42:34.701350     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.419208  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:38 old-k8s-version-807226 kubelet[660]: E0828 18:42:38.701282     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.419424  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:49 old-k8s-version-807226 kubelet[660]: E0828 18:42:49.701288     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.419767  506953 logs.go:138] Found kubelet problem: Aug 28 18:42:49 old-k8s-version-807226 kubelet[660]: E0828 18:42:49.702400     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.420117  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:02 old-k8s-version-807226 kubelet[660]: E0828 18:43:02.701048     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.420323  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:04 old-k8s-version-807226 kubelet[660]: E0828 18:43:04.701421     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.420686  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:13 old-k8s-version-807226 kubelet[660]: E0828 18:43:13.704867     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.420901  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:15 old-k8s-version-807226 kubelet[660]: E0828 18:43:15.701573     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.421243  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:24 old-k8s-version-807226 kubelet[660]: E0828 18:43:24.700926     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.421448  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:27 old-k8s-version-807226 kubelet[660]: E0828 18:43:27.701547     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.421790  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.701771     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.421993  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.702673     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:58.422337  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:50 old-k8s-version-807226 kubelet[660]: E0828 18:43:50.701089     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:58.424806  506953 logs.go:138] Found kubelet problem: Aug 28 18:43:52 old-k8s-version-807226 kubelet[660]: E0828 18:43:52.717096     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	I0828 18:43:58.424841  506953 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:43:58.424869  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:43:58.606810  506953 logs.go:123] Gathering logs for kube-apiserver [ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75] ...
	I0828 18:43:58.606883  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75"
	I0828 18:43:58.721991  506953 logs.go:123] Gathering logs for etcd [24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec] ...
	I0828 18:43:58.722067  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec"
	I0828 18:43:58.802082  506953 logs.go:123] Gathering logs for coredns [64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1] ...
	I0828 18:43:58.802157  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1"
	I0828 18:43:58.887966  506953 logs.go:123] Gathering logs for etcd [2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127] ...
	I0828 18:43:58.887996  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127"
	I0828 18:43:58.956840  506953 logs.go:123] Gathering logs for kube-proxy [1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6] ...
	I0828 18:43:58.956921  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6"
	I0828 18:43:59.038966  506953 logs.go:123] Gathering logs for container status ...
	I0828 18:43:59.039045  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:43:59.189856  506953 logs.go:123] Gathering logs for dmesg ...
	I0828 18:43:59.189887  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:43:59.206061  506953 logs.go:123] Gathering logs for coredns [1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233] ...
	I0828 18:43:59.206131  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233"
	I0828 18:43:59.261030  506953 logs.go:123] Gathering logs for kube-scheduler [e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6] ...
	I0828 18:43:59.261110  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6"
	I0828 18:43:59.311805  506953 logs.go:123] Gathering logs for kindnet [e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b] ...
	I0828 18:43:59.311877  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b"
	I0828 18:43:59.364046  506953 logs.go:123] Gathering logs for kubernetes-dashboard [3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5] ...
	I0828 18:43:59.364075  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5"
	I0828 18:43:59.411769  506953 logs.go:123] Gathering logs for kindnet [f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b] ...
	I0828 18:43:59.411800  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b"
	I0828 18:43:59.494157  506953 logs.go:123] Gathering logs for containerd ...
	I0828 18:43:59.494189  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0828 18:43:59.563919  506953 logs.go:123] Gathering logs for kube-apiserver [a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3] ...
	I0828 18:43:59.563956  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3"
	I0828 18:43:59.630887  506953 logs.go:123] Gathering logs for kube-scheduler [d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4] ...
	I0828 18:43:59.630923  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4"
	I0828 18:43:59.734344  506953 logs.go:123] Gathering logs for kube-proxy [39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7] ...
	I0828 18:43:59.734376  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7"
	I0828 18:43:59.795673  506953 logs.go:123] Gathering logs for kube-controller-manager [ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69] ...
	I0828 18:43:59.795702  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69"
	I0828 18:43:59.895079  506953 logs.go:123] Gathering logs for kube-controller-manager [b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1] ...
	I0828 18:43:59.895118  506953 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1"
	I0828 18:43:59.989951  506953 out.go:358] Setting ErrFile to fd 2...
	I0828 18:43:59.989985  506953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 18:43:59.990051  506953 out.go:270] X Problems detected in kubelet:
	W0828 18:43:59.990065  506953 out.go:270]   Aug 28 18:43:27 old-k8s-version-807226 kubelet[660]: E0828 18:43:27.701547     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:59.990077  506953 out.go:270]   Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.701771     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:59.990090  506953 out.go:270]   Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.702673     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0828 18:43:59.990095  506953 out.go:270]   Aug 28 18:43:50 old-k8s-version-807226 kubelet[660]: E0828 18:43:50.701089     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	W0828 18:43:59.990222  506953 out.go:270]   Aug 28 18:43:52 old-k8s-version-807226 kubelet[660]: E0828 18:43:52.717096     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	I0828 18:43:59.990246  506953 out.go:358] Setting ErrFile to fd 2...
	I0828 18:43:59.990256  506953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:43:57.962751  517188 out.go:235]   - Generating certificates and keys ...
	I0828 18:43:57.962912  517188 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:43:57.962999  517188 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:43:58.334611  517188 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 18:43:58.646254  517188 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 18:43:59.588162  517188 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 18:44:00.271010  517188 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 18:44:01.012055  517188 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 18:44:01.012416  517188 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-014747 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0828 18:44:01.408785  517188 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 18:44:01.409106  517188 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-014747 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0828 18:44:01.879474  517188 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 18:44:02.281596  517188 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 18:44:03.192308  517188 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 18:44:03.192654  517188 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:44:03.655463  517188 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:44:04.105417  517188 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 18:44:04.891653  517188 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:44:05.660304  517188 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:44:05.908334  517188 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:44:05.909178  517188 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:44:05.915701  517188 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:44:05.918426  517188 out.go:235]   - Booting up control plane ...
	I0828 18:44:05.918574  517188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:44:05.918671  517188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:44:05.920147  517188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:44:05.934034  517188 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:44:05.940668  517188 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:44:05.940734  517188 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:44:06.056171  517188 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 18:44:06.056299  517188 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 18:44:07.056288  517188 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001752082s
	I0828 18:44:07.056377  517188 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 18:44:09.991483  506953 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0828 18:44:10.007586  506953 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0828 18:44:10.010464  506953 out.go:201] 
	W0828 18:44:10.013284  506953 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0828 18:44:10.013326  506953 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0828 18:44:10.013354  506953 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0828 18:44:10.013361  506953 out.go:270] * 
	W0828 18:44:10.014310  506953 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:44:10.017397  506953 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	da6c90b288324       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   1988fbaed2bed       dashboard-metrics-scraper-8d5bb5db8-pqlrf
	16643aefd7e5a       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   3c270af4bddbf       storage-provisioner
	3ec78b2440734       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   253338198b0f5       kubernetes-dashboard-cd95d586-8sp92
	39c00f00889f2       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   a491baf4bcf8f       kube-proxy-jqkn2
	1c17a50a955c3       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   043eccbb67088       coredns-74ff55c5b-2pk8p
	390c7ff03ee9f       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   dfde861f0fd4e       busybox
	f76caec21f5d8       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   68cb3be99feb0       kindnet-cq7cs
	b403b03ba5082       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   3c270af4bddbf       storage-provisioner
	d12142543e366       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   6bc52b8385b5a       kube-scheduler-old-k8s-version-807226
	ed51240341691       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   d92efcf2886cf       kube-controller-manager-old-k8s-version-807226
	ecb3703d64384       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   b42b5b393f8b3       kube-apiserver-old-k8s-version-807226
	2bde87bfd1667       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   be6d8b6f79c51       etcd-old-k8s-version-807226
	e0b0498e6b866       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   2f8c7cd944dc4       busybox
	64c9a7288d98a       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   f226c6ad762b1       coredns-74ff55c5b-2pk8p
	e1e5afdba81b9       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   8e0ac434f2fc4       kindnet-cq7cs
	1dbad4a76fdde       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   a008aa81ef3f1       kube-proxy-jqkn2
	b3adda8eb7c3b       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   e13ed25b0cb2c       kube-controller-manager-old-k8s-version-807226
	a8f32bf4ba5d9       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   e71e1faf0ec49       kube-apiserver-old-k8s-version-807226
	e10b5ef611854       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   728e71dc41365       kube-scheduler-old-k8s-version-807226
	24b8ed82576ea       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   78dac09ff3bf0       etcd-old-k8s-version-807226
	
	
	==> containerd <==
	Aug 28 18:40:08 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:40:08.731789376Z" level=info msg="CreateContainer within sandbox \"1988fbaed2beda1028459083f0ddbfb477a525d3e018adc5505ca4c8e7007aec\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"1f424bb121e0eb7421c22a1cb51a9941a66b8845385a7708ce8e58b9e8612903\""
	Aug 28 18:40:08 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:40:08.732907476Z" level=info msg="StartContainer for \"1f424bb121e0eb7421c22a1cb51a9941a66b8845385a7708ce8e58b9e8612903\""
	Aug 28 18:40:08 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:40:08.819075452Z" level=info msg="StartContainer for \"1f424bb121e0eb7421c22a1cb51a9941a66b8845385a7708ce8e58b9e8612903\" returns successfully"
	Aug 28 18:40:08 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:40:08.852727403Z" level=info msg="shim disconnected" id=1f424bb121e0eb7421c22a1cb51a9941a66b8845385a7708ce8e58b9e8612903 namespace=k8s.io
	Aug 28 18:40:08 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:40:08.852786414Z" level=warning msg="cleaning up after shim disconnected" id=1f424bb121e0eb7421c22a1cb51a9941a66b8845385a7708ce8e58b9e8612903 namespace=k8s.io
	Aug 28 18:40:08 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:40:08.852799427Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 28 18:40:09 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:40:09.270865267Z" level=info msg="RemoveContainer for \"6e812968461640ee64136cedb659da12ca429757ff74e916e31ad34b46ebffb4\""
	Aug 28 18:40:09 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:40:09.278012371Z" level=info msg="RemoveContainer for \"6e812968461640ee64136cedb659da12ca429757ff74e916e31ad34b46ebffb4\" returns successfully"
	Aug 28 18:41:08 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:08.702000429Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 28 18:41:08 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:08.708098660Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 28 18:41:08 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:08.713036518Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 28 18:41:08 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:08.713080801Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 28 18:41:31 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:31.703289419Z" level=info msg="CreateContainer within sandbox \"1988fbaed2beda1028459083f0ddbfb477a525d3e018adc5505ca4c8e7007aec\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Aug 28 18:41:31 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:31.719985300Z" level=info msg="CreateContainer within sandbox \"1988fbaed2beda1028459083f0ddbfb477a525d3e018adc5505ca4c8e7007aec\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d\""
	Aug 28 18:41:31 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:31.720696452Z" level=info msg="StartContainer for \"da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d\""
	Aug 28 18:41:31 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:31.785746216Z" level=info msg="StartContainer for \"da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d\" returns successfully"
	Aug 28 18:41:31 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:31.808071973Z" level=info msg="shim disconnected" id=da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d namespace=k8s.io
	Aug 28 18:41:31 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:31.808132116Z" level=warning msg="cleaning up after shim disconnected" id=da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d namespace=k8s.io
	Aug 28 18:41:31 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:31.808148001Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 28 18:41:32 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:32.527949722Z" level=info msg="RemoveContainer for \"1f424bb121e0eb7421c22a1cb51a9941a66b8845385a7708ce8e58b9e8612903\""
	Aug 28 18:41:32 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:41:32.533509781Z" level=info msg="RemoveContainer for \"1f424bb121e0eb7421c22a1cb51a9941a66b8845385a7708ce8e58b9e8612903\" returns successfully"
	Aug 28 18:43:52 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:43:52.703555338Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 28 18:43:52 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:43:52.713448787Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 28 18:43:52 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:43:52.715502333Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 28 18:43:52 old-k8s-version-807226 containerd[567]: time="2024-08-28T18:43:52.715797233Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [1c17a50a955c3be41807339455f7aab56b85bdf62e26b509e130baed8cff9233] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:47484 - 45753 "HINFO IN 1327439505057511823.7301866805424390752. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010797417s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0828 18:38:49.139157       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-28 18:38:19.138575258 +0000 UTC m=+0.030316470) (total time: 30.000470667s):
	Trace[2019727887]: [30.000470667s] [30.000470667s] END
	E0828 18:38:49.139191       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0828 18:38:49.139580       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-28 18:38:19.139323045 +0000 UTC m=+0.031064257) (total time: 30.000242427s):
	Trace[939984059]: [30.000242427s] [30.000242427s] END
	E0828 18:38:49.139597       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0828 18:38:49.139657       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-28 18:38:19.13929107 +0000 UTC m=+0.031032282) (total time: 30.000298393s):
	Trace[911902081]: [30.000298393s] [30.000298393s] END
	E0828 18:38:49.139682       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [64c9a7288d98a19615583dc145f4c18e5c1fe89beb8114416eb7999434f725d1] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:37451 - 49644 "HINFO IN 5620879983300694207.51349192996491991. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.053957157s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-807226
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-807226
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=old-k8s-version-807226
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T18_35_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 18:35:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-807226
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 18:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 18:39:17 +0000   Wed, 28 Aug 2024 18:35:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 18:39:17 +0000   Wed, 28 Aug 2024 18:35:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 18:39:17 +0000   Wed, 28 Aug 2024 18:35:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 18:39:17 +0000   Wed, 28 Aug 2024 18:35:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-807226
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 16ec34867f304efb91beedbcf2f3a07a
	  System UUID:                52472be6-64ce-4b48-8ac9-01fc5d91dbc7
	  Boot ID:                    d0152fd0-4c93-4332-a156-fea49619c341
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 coredns-74ff55c5b-2pk8p                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m19s
	  kube-system                 etcd-old-k8s-version-807226                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m27s
	  kube-system                 kindnet-cq7cs                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m19s
	  kube-system                 kube-apiserver-old-k8s-version-807226             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-old-k8s-version-807226    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-jqkn2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-scheduler-old-k8s-version-807226             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 metrics-server-9975d5f86-6vl9g                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m33s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-pqlrf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-8sp92               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m47s (x4 over 8m47s)  kubelet     Node old-k8s-version-807226 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m47s (x5 over 8m47s)  kubelet     Node old-k8s-version-807226 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m47s (x4 over 8m47s)  kubelet     Node old-k8s-version-807226 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m28s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m28s                  kubelet     Node old-k8s-version-807226 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s                  kubelet     Node old-k8s-version-807226 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m28s                  kubelet     Node old-k8s-version-807226 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m19s                  kubelet     Node old-k8s-version-807226 status is now: NodeReady
	  Normal  Starting                 8m18s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m5s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m5s (x8 over 6m5s)    kubelet     Node old-k8s-version-807226 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x8 over 6m5s)    kubelet     Node old-k8s-version-807226 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x7 over 6m5s)    kubelet     Node old-k8s-version-807226 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m52s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Aug28 17:17] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [24b8ed82576ea53ec4f8ce85379a5220fdda051fabb62136949cc6fd84cf46ec] <==
	raft2024/08/28 18:35:26 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/08/28 18:35:26 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/08/28 18:35:26 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/08/28 18:35:26 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/08/28 18:35:26 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-08-28 18:35:26.944266 I | etcdserver: published {Name:old-k8s-version-807226 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-08-28 18:35:26.944507 I | embed: ready to serve client requests
	2024-08-28 18:35:26.946466 I | embed: serving client requests on 192.168.85.2:2379
	2024-08-28 18:35:26.948401 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-28 18:35:26.949390 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-28 18:35:26.951408 I | embed: ready to serve client requests
	2024-08-28 18:35:26.951584 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-28 18:35:26.952723 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-28 18:35:49.861427 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:35:54.305670 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:36:04.305318 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:36:14.305479 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:36:24.305305 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:36:34.305416 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:36:44.305350 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:36:54.305716 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:37:04.305383 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:37:14.305318 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:37:24.305718 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:37:34.305262 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [2bde87bfd1667f8e50dff844931e074eee782fc244ca2c70878e1a048c6a1127] <==
	2024-08-28 18:40:22.364904 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:40:32.364941 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:40:42.365368 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:40:52.365048 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:41:02.364918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:41:12.364975 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:41:22.365010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:41:32.364961 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:41:42.365015 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:41:52.365110 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:42:02.364842 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:42:12.364988 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:42:22.365150 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:42:32.364877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:42:42.365002 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:42:52.365035 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:43:02.364955 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:43:12.365007 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:43:22.365008 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:43:32.365016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:43:42.365077 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:43:47.236288 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" " with result "range_response_count:123 size:98142" took too long (116.057196ms) to execute
	2024-08-28 18:43:52.365015 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:44:02.365452 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-28 18:44:12.365008 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 18:44:12 up  2:26,  0 users,  load average: 2.14, 1.83, 2.33
	Linux old-k8s-version-807226 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e1e5afdba81b9617f222a35e528dc3756b8e74c14f503f0017f60ea7c1b6e41b] <==
	I0828 18:35:57.120223       1 controller.go:338] Waiting for informer caches to sync
	I0828 18:35:57.120230       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0828 18:35:57.320390       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0828 18:35:57.320440       1 metrics.go:61] Registering metrics
	I0828 18:35:57.320510       1 controller.go:374] Syncing nftables rules
	I0828 18:36:07.120027       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:36:07.120092       1 main.go:299] handling current node
	I0828 18:36:17.120060       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:36:17.120099       1 main.go:299] handling current node
	I0828 18:36:27.124148       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:36:27.124191       1 main.go:299] handling current node
	I0828 18:36:37.128413       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:36:37.128448       1 main.go:299] handling current node
	I0828 18:36:47.125250       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:36:47.125287       1 main.go:299] handling current node
	I0828 18:36:57.120979       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:36:57.121030       1 main.go:299] handling current node
	I0828 18:37:07.124764       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:37:07.124805       1 main.go:299] handling current node
	I0828 18:37:17.120025       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:37:17.120084       1 main.go:299] handling current node
	I0828 18:37:27.120648       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:37:27.120745       1 main.go:299] handling current node
	I0828 18:37:37.125824       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:37:37.125863       1 main.go:299] handling current node
	
	
	==> kindnet [f76caec21f5d87bbfe858eb5f86a93b5dc89f41401af685c275fa8a2c8443d0b] <==
	I0828 18:42:09.429719       1 main.go:299] handling current node
	I0828 18:42:19.420380       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:42:19.420442       1 main.go:299] handling current node
	I0828 18:42:29.427783       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:42:29.427815       1 main.go:299] handling current node
	I0828 18:42:39.431485       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:42:39.431606       1 main.go:299] handling current node
	I0828 18:42:49.427358       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:42:49.427420       1 main.go:299] handling current node
	I0828 18:42:59.426776       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:42:59.426810       1 main.go:299] handling current node
	I0828 18:43:09.428659       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:43:09.428694       1 main.go:299] handling current node
	I0828 18:43:19.420407       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:43:19.420444       1 main.go:299] handling current node
	I0828 18:43:29.424901       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:43:29.424939       1 main.go:299] handling current node
	I0828 18:43:39.432330       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:43:39.432420       1 main.go:299] handling current node
	I0828 18:43:49.428458       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:43:49.428502       1 main.go:299] handling current node
	I0828 18:43:59.426835       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:43:59.426868       1 main.go:299] handling current node
	I0828 18:44:09.420114       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0828 18:44:09.420223       1 main.go:299] handling current node
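
The two kindnet excerpts above contain nothing but the agent's roughly 10-second reconcile loop for the single node 192.168.85.2, so the CNI daemon itself looks healthy both before and after the restart. If that needed confirming outside the logs, a couple of standard checks would do (a sketch, assuming the kubectl context is named after the profile, old-k8s-version-807226):

	# confirm the kindnet daemonset is fully scheduled and ready
	kubectl --context old-k8s-version-807226 -n kube-system get daemonset kindnet
	# tail the agent directly; app=kindnet is the label its manifest applies
	kubectl --context old-k8s-version-807226 -n kube-system logs -l app=kindnet --tail=20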
	
	
	==> kube-apiserver [a8f32bf4ba5d99b8f45f5175a6f2c38348d3beb6ff968520f25e0e37cbe28ee3] <==
	I0828 18:35:34.303681       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0828 18:35:34.303707       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0828 18:35:34.311575       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0828 18:35:34.316015       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0828 18:35:34.316040       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0828 18:35:34.769820       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0828 18:35:34.807720       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0828 18:35:34.918168       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0828 18:35:34.919522       1 controller.go:606] quota admission added evaluator for: endpoints
	I0828 18:35:34.923630       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0828 18:35:36.021007       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0828 18:35:36.390424       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0828 18:35:36.498399       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0828 18:35:44.834862       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0828 18:35:53.569232       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0828 18:35:53.592299       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0828 18:36:05.836630       1 client.go:360] parsed scheme: "passthrough"
	I0828 18:36:05.836679       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0828 18:36:05.836689       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0828 18:36:42.785736       1 client.go:360] parsed scheme: "passthrough"
	I0828 18:36:42.785779       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0828 18:36:42.785788       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0828 18:37:20.014511       1 client.go:360] parsed scheme: "passthrough"
	I0828 18:37:20.014576       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0828 18:37:20.014586       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [ecb3703d6438446d917c59591d498e4918f8adf0b782ca34c22308ec87741d75] <==
	I0828 18:40:44.907356       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0828 18:40:44.907365       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0828 18:41:18.681684       1 handler_proxy.go:102] no RequestInfo found in the context
	E0828 18:41:18.681942       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0828 18:41:18.681961       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:41:24.929389       1 client.go:360] parsed scheme: "passthrough"
	I0828 18:41:24.929450       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0828 18:41:24.929458       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0828 18:41:56.746353       1 client.go:360] parsed scheme: "passthrough"
	I0828 18:41:56.746403       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0828 18:41:56.746553       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0828 18:42:36.550544       1 client.go:360] parsed scheme: "passthrough"
	I0828 18:42:36.550590       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0828 18:42:36.550599       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0828 18:43:07.296257       1 client.go:360] parsed scheme: "passthrough"
	I0828 18:43:07.296303       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0828 18:43:07.296313       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0828 18:43:17.437686       1 handler_proxy.go:102] no RequestInfo found in the context
	E0828 18:43:17.437891       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0828 18:43:17.437907       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:43:42.831916       1 client.go:360] parsed scheme: "passthrough"
	I0828 18:43:42.832162       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0828 18:43:42.832281       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
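
The recurring 503s while loading the OpenAPI spec for v1beta1.metrics.k8s.io mean the aggregated metrics API is registered but its backing service never becomes reachable, which matches the metrics-server pod stuck in ImagePullBackOff in the kubelet log further down. A quick way to confirm which side is at fault (a sketch; the k8s-app=metrics-server label is the conventional one, not taken from this report):

	# Available should be False, with a reason such as FailedDiscoveryCheck
	kubectl --context old-k8s-version-807226 get apiservice v1beta1.metrics.k8s.io -o yaml
	# the backing pod should show as not Running in kube-system
	kubectl --context old-k8s-version-807226 -n kube-system get pods -l k8s-app=metrics-server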
	
	
	==> kube-controller-manager [b3adda8eb7c3b2dd6ec104ed8d15215991933824ae321c1b57c97847cd673ee1] <==
	I0828 18:35:53.509709       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0828 18:35:53.511212       1 event.go:291] "Event occurred" object="old-k8s-version-807226" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-807226 event: Registered Node old-k8s-version-807226 in Controller"
	I0828 18:35:53.545337       1 shared_informer.go:247] Caches are synced for endpoint 
	I0828 18:35:53.545543       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0828 18:35:53.545685       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0828 18:35:53.545781       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0828 18:35:53.557919       1 shared_informer.go:247] Caches are synced for stateful set 
	I0828 18:35:53.563110       1 shared_informer.go:247] Caches are synced for attach detach 
	I0828 18:35:53.570691       1 shared_informer.go:247] Caches are synced for deployment 
	I0828 18:35:53.571294       1 shared_informer.go:247] Caches are synced for resource quota 
	I0828 18:35:53.592429       1 shared_informer.go:247] Caches are synced for resource quota 
	I0828 18:35:53.628807       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jqkn2"
	I0828 18:35:53.662160       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cq7cs"
	I0828 18:35:53.650989       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0828 18:35:53.835607       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0828 18:35:53.846027       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vt5fj"
	I0828 18:35:53.923433       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-2pk8p"
	E0828 18:35:53.964025       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"89504ede-0e37-429f-b2f0-1bfb60e35890", ResourceVersion:"242", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63860466936, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40019da860), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40019da880)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x40019da8a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40018ff240), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019da
8c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019da8e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40019da920)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400152cfc0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000e26ca8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40005e1570), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000307a68)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000e26d08)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0828 18:35:53.989807       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0828 18:35:53.989864       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0828 18:35:53.989835       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"d0803806-24df-4953-91f9-9196a5333efa", ResourceVersion:"250", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63860466937, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40019da980), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40019da9a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40019da9c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019da9e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019daa00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019daa20), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40019daa40)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40019daa80)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400152d020), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000e26f58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40005e1650), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000307a70)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000e26fa0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0828 18:35:54.037198       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0828 18:35:54.965414       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0828 18:35:54.977301       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-vt5fj"
	I0828 18:37:38.384941       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
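
The two very long daemon_controller.go errors above are optimistic-concurrency conflicts: the controller tried to write status for the kube-proxy and kindnet daemonsets while they were still being updated by their original managers, got "the object has been modified", and retried. Both daemonsets come up normally afterwards (see the kindnet and kube-proxy logs elsewhere in this output), so this is start-up noise rather than a failure; it could be double-checked with, for example (sketch):

	# both rollouts should report success despite the earlier status-update conflicts
	kubectl --context old-k8s-version-807226 -n kube-system rollout status daemonset kube-proxy
	kubectl --context old-k8s-version-807226 -n kube-system rollout status daemonset kindnet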
	
	
	==> kube-controller-manager [ed512403416916921f6eeb8c28c6f140dc1c21179e4a71007da06ab5702fcf69] <==
	E0828 18:40:05.757150       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0828 18:40:11.110164       1 request.go:655] Throttling request took 1.047875426s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0828 18:40:11.961744       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0828 18:40:36.262519       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0828 18:40:43.612193       1 request.go:655] Throttling request took 1.048565336s, request: GET:https://192.168.85.2:8443/apis/policy/v1beta1?timeout=32s
	W0828 18:40:44.463569       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0828 18:41:06.764347       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0828 18:41:16.114193       1 request.go:655] Throttling request took 1.048175535s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0828 18:41:16.965535       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0828 18:41:37.266341       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0828 18:41:48.615974       1 request.go:655] Throttling request took 1.048386493s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0828 18:41:49.467368       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0828 18:42:07.767601       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0828 18:42:21.117735       1 request.go:655] Throttling request took 1.048055312s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0828 18:42:21.969350       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0828 18:42:38.269050       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0828 18:42:53.619745       1 request.go:655] Throttling request took 1.048427192s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0828 18:42:54.472034       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0828 18:43:08.771061       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0828 18:43:26.122596       1 request.go:655] Throttling request took 1.048257057s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0828 18:43:26.974221       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0828 18:43:39.320125       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0828 18:43:58.624587       1 request.go:655] Throttling request took 1.047740247s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0828 18:43:59.476400       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0828 18:44:09.821796       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
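
These controller-manager symptoms are downstream of the same unavailable metrics API: resource-quota and garbage-collector discovery keeps failing on metrics.k8s.io/v1beta1, and the repeated retries trigger the ~1s client-side throttling lines. The failing group can be probed directly, which should reproduce the 503 seen in the apiserver log (sketch):

	# with metrics-server down this returns "the server is currently unable to handle the request"
	kubectl --context old-k8s-version-807226 get --raw /apis/metrics.k8s.io/v1beta1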
	
	
	==> kube-proxy [1dbad4a76fdde250ef6fa39fee85f60ddc5aa2a1b8c3bcb7314097b9936d5cb6] <==
	I0828 18:35:54.808959       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0828 18:35:54.809054       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0828 18:35:54.833666       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0828 18:35:54.833775       1 server_others.go:185] Using iptables Proxier.
	I0828 18:35:54.834007       1 server.go:650] Version: v1.20.0
	I0828 18:35:54.835156       1 config.go:315] Starting service config controller
	I0828 18:35:54.835172       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0828 18:35:54.835191       1 config.go:224] Starting endpoint slice config controller
	I0828 18:35:54.835195       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0828 18:35:54.935260       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0828 18:35:54.935321       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [39c00f00889f25d7e3a92a4745099d6d42027030d9079cda43c329e6074590d7] <==
	I0828 18:38:20.354353       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0828 18:38:20.354626       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0828 18:38:20.372847       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0828 18:38:20.372949       1 server_others.go:185] Using iptables Proxier.
	I0828 18:38:20.373226       1 server.go:650] Version: v1.20.0
	I0828 18:38:20.373917       1 config.go:315] Starting service config controller
	I0828 18:38:20.376301       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0828 18:38:20.376180       1 config.go:224] Starting endpoint slice config controller
	I0828 18:38:20.376351       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0828 18:38:20.476468       1 shared_informer.go:247] Caches are synced for service config 
	I0828 18:38:20.476680       1 shared_informer.go:247] Caches are synced for endpoint slice config 
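
Both kube-proxy instances log 'Unknown proxy mode "", assuming iptables proxy': no mode is set in their configuration, so they fall back to the iptables proxier, which is the expected default here rather than an error. In a kubeadm-provisioned cluster like this one, the setting lives in the kube-proxy ConfigMap that the daemonset mounts at /var/lib/kube-proxy/config.conf (sketch):

	# the "mode" field should be empty, matching the log line above
	kubectl --context old-k8s-version-807226 -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'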
	
	
	==> kube-scheduler [d12142543e36680809d6caebbe06a06785675f9248b5dd6343974fc994a51ee4] <==
	I0828 18:38:11.884850       1 serving.go:331] Generated self-signed cert in-memory
	W0828 18:38:15.977254       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 18:38:15.977300       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 18:38:15.977315       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 18:38:15.977321       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 18:38:16.309338       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0828 18:38:16.309807       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 18:38:16.309816       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 18:38:16.309829       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0828 18:38:16.328500       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 18:38:16.328596       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0828 18:38:16.329990       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 18:38:16.330211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0828 18:38:16.330296       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 18:38:16.330349       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 18:38:16.330410       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 18:38:16.330457       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0828 18:38:16.330511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 18:38:16.330563       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 18:38:16.330620       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0828 18:38:16.330668       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0828 18:38:17.911607       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [e10b5ef611854758c4cf248564c5fa843b706c2bf9353f7be7cd6005660988e6] <==
	W0828 18:35:33.530853       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 18:35:33.530898       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 18:35:33.530910       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 18:35:33.530916       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 18:35:33.584558       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0828 18:35:33.584668       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 18:35:33.584678       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 18:35:33.584696       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0828 18:35:33.590792       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 18:35:33.590885       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 18:35:33.590953       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 18:35:33.591015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0828 18:35:33.591083       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0828 18:35:33.591156       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0828 18:35:33.591268       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 18:35:33.591397       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0828 18:35:33.591553       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 18:35:33.591675       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0828 18:35:33.591878       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 18:35:33.592202       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 18:35:34.441350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0828 18:35:34.465391       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 18:35:34.519649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 18:35:34.633859       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0828 18:35:35.184780       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
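
The burst of "forbidden" list/watch errors from both scheduler instances is start-up noise: the scheduler comes up before its RBAC bindings and the extension-apiserver-authentication ConfigMap are readable, and the errors stop once the client-ca informer cache syncs (the last line of each excerpt). If they persisted instead, the scheduler's default bindings could be inspected like this (sketch; these names are the upstream Kubernetes defaults, not something specific to this report):

	# the binding that grants system:kube-scheduler its list/watch permissions
	kubectl --context old-k8s-version-807226 get clusterrolebinding system:kube-scheduler -o wide
	# impersonate the scheduler to spot-check a representative permission
	kubectl --context old-k8s-version-807226 auth can-i list nodes --as=system:kube-scheduler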
	
	
	==> kubelet <==
	Aug 28 18:42:38 old-k8s-version-807226 kubelet[660]: E0828 18:42:38.701282     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	Aug 28 18:42:49 old-k8s-version-807226 kubelet[660]: E0828 18:42:49.701288     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 28 18:42:49 old-k8s-version-807226 kubelet[660]: I0828 18:42:49.702023     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d
	Aug 28 18:42:49 old-k8s-version-807226 kubelet[660]: E0828 18:42:49.702400     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	Aug 28 18:43:02 old-k8s-version-807226 kubelet[660]: I0828 18:43:02.700619     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d
	Aug 28 18:43:02 old-k8s-version-807226 kubelet[660]: E0828 18:43:02.701048     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	Aug 28 18:43:04 old-k8s-version-807226 kubelet[660]: E0828 18:43:04.701421     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 28 18:43:13 old-k8s-version-807226 kubelet[660]: I0828 18:43:13.700777     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d
	Aug 28 18:43:13 old-k8s-version-807226 kubelet[660]: E0828 18:43:13.704867     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	Aug 28 18:43:15 old-k8s-version-807226 kubelet[660]: E0828 18:43:15.701573     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 28 18:43:24 old-k8s-version-807226 kubelet[660]: I0828 18:43:24.700557     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d
	Aug 28 18:43:24 old-k8s-version-807226 kubelet[660]: E0828 18:43:24.700926     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	Aug 28 18:43:27 old-k8s-version-807226 kubelet[660]: E0828 18:43:27.701547     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: I0828 18:43:38.701001     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d
	Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.701771     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	Aug 28 18:43:38 old-k8s-version-807226 kubelet[660]: E0828 18:43:38.702673     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 28 18:43:50 old-k8s-version-807226 kubelet[660]: I0828 18:43:50.700757     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d
	Aug 28 18:43:50 old-k8s-version-807226 kubelet[660]: E0828 18:43:50.701089     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	Aug 28 18:43:52 old-k8s-version-807226 kubelet[660]: E0828 18:43:52.716043     660 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 28 18:43:52 old-k8s-version-807226 kubelet[660]: E0828 18:43:52.716531     660 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 28 18:43:52 old-k8s-version-807226 kubelet[660]: E0828 18:43:52.716881     660 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-6hcmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b
4-496d-4056-8e3a-ed3392131fa9): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 28 18:43:52 old-k8s-version-807226 kubelet[660]: E0828 18:43:52.717096     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 28 18:44:03 old-k8s-version-807226 kubelet[660]: I0828 18:44:03.700834     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: da6c90b288324c8c5e4fe830d53cd4f3badd3b261a926712dee3929f6dfca18d
	Aug 28 18:44:03 old-k8s-version-807226 kubelet[660]: E0828 18:44:03.701971     660 pod_workers.go:191] Error syncing pod 237838e6-e5d7-4770-a37e-0bb6993e575b ("dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pqlrf_kubernetes-dashboard(237838e6-e5d7-4770-a37e-0bb6993e575b)"
	Aug 28 18:44:06 old-k8s-version-807226 kubelet[660]: E0828 18:44:06.701921     660 pod_workers.go:191] Error syncing pod 7f8dd7b4-496d-4056-8e3a-ed3392131fa9 ("metrics-server-9975d5f86-6vl9g_kube-system(7f8dd7b4-496d-4056-8e3a-ed3392131fa9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [3ec78b24407346e20914b1c71c700c8794c0ecc6defc1da45fc43621c50a0dd5] <==
	2024/08/28 18:38:38 Using namespace: kubernetes-dashboard
	2024/08/28 18:38:38 Using in-cluster config to connect to apiserver
	2024/08/28 18:38:38 Using secret token for csrf signing
	2024/08/28 18:38:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/28 18:38:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/28 18:38:38 Successful initial request to the apiserver, version: v1.20.0
	2024/08/28 18:38:38 Generating JWE encryption key
	2024/08/28 18:38:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/28 18:38:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/28 18:38:38 Initializing JWE encryption key from synchronized object
	2024/08/28 18:38:38 Creating in-cluster Sidecar client
	2024/08/28 18:38:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:38:38 Serving insecurely on HTTP port: 9090
	2024/08/28 18:39:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:39:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:40:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:40:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:41:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:41:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:42:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:42:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:43:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:43:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:44:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/28 18:38:38 Starting overwatch
	
	
	==> storage-provisioner [16643aefd7e5a512848e1bcece377cd38dda4b18ebb19c909ce665340523c5d9] <==
	I0828 18:39:01.987966       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 18:39:02.013271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 18:39:02.013498       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 18:39:19.507752       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 18:39:19.524396       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb0ab573-67cb-4c34-9130-8b5c50efa8f2", APIVersion:"v1", ResourceVersion:"840", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-807226_6f3b4e4f-cdb8-4800-8fc0-bd725950577d became leader
	I0828 18:39:19.540405       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-807226_6f3b4e4f-cdb8-4800-8fc0-bd725950577d!
	I0828 18:39:19.640792       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-807226_6f3b4e4f-cdb8-4800-8fc0-bd725950577d!
	
	
	==> storage-provisioner [b403b03ba50820ad3029da05be19aca5fe7f7845be195379912de47fd558d6da] <==
	I0828 18:38:18.144029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0828 18:38:48.146201       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-807226 -n old-k8s-version-807226
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-807226 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-6vl9g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-807226 describe pod metrics-server-9975d5f86-6vl9g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-807226 describe pod metrics-server-9975d5f86-6vl9g: exit status 1 (110.110211ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-6vl9g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-807226 describe pod metrics-server-9975d5f86-6vl9g: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (383.38s)
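For local triage of this failure, a minimal sketch follows. The context, namespace, and pod names are copied from the log dump above; the go test invocation assumes the usual test/integration layout and omits whatever extra harness flags the CI job passes, so treat it as a starting point rather than the exact CI command.

    # Re-run only the failing subtest with the standard Go test runner
    # (repository path and timeout are assumptions, not taken from this run).
    go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/SecondStart' -timeout 90m -v

    # Inspect the two unhealthy workloads the kubelet log dump complains about.
    kubectl --context old-k8s-version-807226 -n kube-system describe pod metrics-server-9975d5f86-6vl9g
    kubectl --context old-k8s-version-807226 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-pqlrf --previous
    kubectl --context old-k8s-version-807226 get events -A --sort-by=.lastTimestamp

The fake.domain pull errors for metrics-server appear to be intentional (the test points the addon at an unresolvable registry), so they are usually noise rather than the root cause of the timeout.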

                                                
                                    

Test pass (298/328)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.54
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 7.43
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 220.11
31 TestAddons/serial/GCPAuth/Namespaces 0.17
33 TestAddons/parallel/Registry 16.42
34 TestAddons/parallel/Ingress 20.41
35 TestAddons/parallel/InspektorGadget 12.06
36 TestAddons/parallel/MetricsServer 5.85
39 TestAddons/parallel/CSI 54.01
40 TestAddons/parallel/Headlamp 17.96
41 TestAddons/parallel/CloudSpanner 5.6
42 TestAddons/parallel/LocalPath 52.09
43 TestAddons/parallel/NvidiaDevicePlugin 5.76
44 TestAddons/parallel/Yakd 11.83
45 TestAddons/StoppedEnableDisable 12.31
46 TestCertOptions 32.96
47 TestCertExpiration 227.13
49 TestForceSystemdFlag 42.5
50 TestForceSystemdEnv 43.26
51 TestDockerEnvContainerd 49.25
56 TestErrorSpam/setup 29.91
57 TestErrorSpam/start 1.03
58 TestErrorSpam/status 1.07
59 TestErrorSpam/pause 1.77
60 TestErrorSpam/unpause 2.16
61 TestErrorSpam/stop 1.46
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 51.8
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.26
73 TestFunctional/serial/CacheCmd/cache/add_local 1.3
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.07
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 43.83
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.67
84 TestFunctional/serial/LogsFileCmd 1.7
85 TestFunctional/serial/InvalidService 4.75
87 TestFunctional/parallel/ConfigCmd 0.53
88 TestFunctional/parallel/DashboardCmd 7.92
89 TestFunctional/parallel/DryRun 0.49
90 TestFunctional/parallel/InternationalLanguage 0.27
91 TestFunctional/parallel/StatusCmd 1.06
95 TestFunctional/parallel/ServiceCmdConnect 6.73
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 24.83
99 TestFunctional/parallel/SSHCmd 0.51
100 TestFunctional/parallel/CpCmd 1.97
102 TestFunctional/parallel/FileSync 0.32
103 TestFunctional/parallel/CertSync 2.17
107 TestFunctional/parallel/NodeLabels 0.1
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
111 TestFunctional/parallel/License 0.32
112 TestFunctional/parallel/Version/short 0.11
113 TestFunctional/parallel/Version/components 1.33
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.04
119 TestFunctional/parallel/ImageCommands/Setup 0.9
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.48
125 TestFunctional/parallel/ServiceCmd/DeployApp 9.27
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.37
136 TestFunctional/parallel/ServiceCmd/List 0.35
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
139 TestFunctional/parallel/ServiceCmd/Format 0.38
140 TestFunctional/parallel/ServiceCmd/URL 0.34
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
148 TestFunctional/parallel/ProfileCmd/profile_list 0.39
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
150 TestFunctional/parallel/MountCmd/any-port 8.1
151 TestFunctional/parallel/MountCmd/specific-port 1.87
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.91
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 112.38
160 TestMultiControlPlane/serial/DeployApp 29.92
161 TestMultiControlPlane/serial/PingHostFromPods 1.62
162 TestMultiControlPlane/serial/AddWorkerNode 23.74
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.77
165 TestMultiControlPlane/serial/CopyFile 19.08
166 TestMultiControlPlane/serial/StopSecondaryNode 12.85
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.55
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.54
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.78
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 141.06
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.57
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
173 TestMultiControlPlane/serial/StopCluster 36.09
174 TestMultiControlPlane/serial/RestartCluster 68.8
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
176 TestMultiControlPlane/serial/AddSecondaryNode 38.05
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
181 TestJSONOutput/start/Command 53
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.76
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.64
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.77
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 41.68
207 TestKicCustomNetwork/use_default_bridge_network 32.9
208 TestKicExistingNetwork 34.41
209 TestKicCustomSubnet 36.04
210 TestKicStaticIP 32.23
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 68.11
215 TestMountStart/serial/StartWithMountFirst 6.4
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 6.61
218 TestMountStart/serial/VerifyMountSecond 0.3
219 TestMountStart/serial/DeleteFirst 1.6
220 TestMountStart/serial/VerifyMountPostDelete 0.28
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.54
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 69.1
227 TestMultiNode/serial/DeployApp2Nodes 17.2
228 TestMultiNode/serial/PingHostFrom2Pods 1
229 TestMultiNode/serial/AddNode 16.48
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.32
232 TestMultiNode/serial/CopyFile 9.95
233 TestMultiNode/serial/StopNode 2.3
234 TestMultiNode/serial/StartAfterStop 9.63
235 TestMultiNode/serial/RestartKeepsNodes 86.66
236 TestMultiNode/serial/DeleteNode 5.99
237 TestMultiNode/serial/StopMultiNode 24.03
238 TestMultiNode/serial/RestartMultiNode 50.04
239 TestMultiNode/serial/ValidateNameConflict 33.95
244 TestPreload 122.31
246 TestScheduledStopUnix 106.45
249 TestInsufficientStorage 13.22
250 TestRunningBinaryUpgrade 88.08
252 TestKubernetesUpgrade 349.05
253 TestMissingContainerUpgrade 175.55
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
256 TestNoKubernetes/serial/StartWithK8s 36.85
257 TestNoKubernetes/serial/StartWithStopK8s 19.71
258 TestNoKubernetes/serial/Start 6.69
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
260 TestNoKubernetes/serial/ProfileList 0.93
261 TestNoKubernetes/serial/Stop 1.42
262 TestNoKubernetes/serial/StartNoArgs 6.51
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
264 TestStoppedBinaryUpgrade/Setup 0.9
265 TestStoppedBinaryUpgrade/Upgrade 107.35
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
275 TestPause/serial/Start 54.26
276 TestPause/serial/SecondStartNoReconfiguration 8.23
277 TestPause/serial/Pause 0.91
278 TestPause/serial/VerifyStatus 0.42
279 TestPause/serial/Unpause 0.83
280 TestPause/serial/PauseAgain 1.08
281 TestPause/serial/DeletePaused 2.81
282 TestPause/serial/VerifyDeletedResources 0.45
290 TestNetworkPlugins/group/false 5.2
295 TestStartStop/group/old-k8s-version/serial/FirstStart 155.54
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.74
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.99
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.47
300 TestStartStop/group/old-k8s-version/serial/Stop 12.37
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.35
303 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.46
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.23
305 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.82
308 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
311 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.14
313 TestStartStop/group/embed-certs/serial/FirstStart 65.9
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
317 TestStartStop/group/old-k8s-version/serial/Pause 2.92
319 TestStartStop/group/no-preload/serial/FirstStart 61.34
320 TestStartStop/group/embed-certs/serial/DeployApp 9.44
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.49
322 TestStartStop/group/embed-certs/serial/Stop 12.69
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
324 TestStartStop/group/embed-certs/serial/SecondStart 303.98
325 TestStartStop/group/no-preload/serial/DeployApp 9.38
326 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
327 TestStartStop/group/no-preload/serial/Stop 12.07
328 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
329 TestStartStop/group/no-preload/serial/SecondStart 289.58
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
333 TestStartStop/group/embed-certs/serial/Pause 3.08
335 TestStartStop/group/newest-cni/serial/FirstStart 38.03
336 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.17
338 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
339 TestStartStop/group/no-preload/serial/Pause 4.18
340 TestNetworkPlugins/group/auto/Start 74.25
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.43
343 TestStartStop/group/newest-cni/serial/Stop 1.28
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
345 TestStartStop/group/newest-cni/serial/SecondStart 23.42
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
349 TestStartStop/group/newest-cni/serial/Pause 4.83
350 TestNetworkPlugins/group/kindnet/Start 67.03
351 TestNetworkPlugins/group/auto/KubeletFlags 0.34
352 TestNetworkPlugins/group/auto/NetCatPod 10.34
353 TestNetworkPlugins/group/auto/DNS 0.26
354 TestNetworkPlugins/group/auto/Localhost 0.17
355 TestNetworkPlugins/group/auto/HairPin 0.15
356 TestNetworkPlugins/group/calico/Start 66.53
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
359 TestNetworkPlugins/group/kindnet/NetCatPod 11.57
360 TestNetworkPlugins/group/kindnet/DNS 0.25
361 TestNetworkPlugins/group/kindnet/Localhost 0.17
362 TestNetworkPlugins/group/kindnet/HairPin 0.18
363 TestNetworkPlugins/group/custom-flannel/Start 57.86
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.37
366 TestNetworkPlugins/group/calico/NetCatPod 11.44
367 TestNetworkPlugins/group/calico/DNS 0.22
368 TestNetworkPlugins/group/calico/Localhost 0.18
369 TestNetworkPlugins/group/calico/HairPin 0.2
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.37
372 TestNetworkPlugins/group/enable-default-cni/Start 77.02
373 TestNetworkPlugins/group/custom-flannel/DNS 0.26
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.29
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
376 TestNetworkPlugins/group/flannel/Start 52.8
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
384 TestNetworkPlugins/group/flannel/NetCatPod 9.26
385 TestNetworkPlugins/group/flannel/DNS 0.24
386 TestNetworkPlugins/group/flannel/Localhost 0.22
387 TestNetworkPlugins/group/flannel/HairPin 0.22
388 TestNetworkPlugins/group/bridge/Start 78.24
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
390 TestNetworkPlugins/group/bridge/NetCatPod 10.26
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.14
393 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (8.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-567694 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-567694 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.54446151s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.54s)
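The json-events subtest drives minikube start with -o=json, which emits one JSON progress event per line on stdout. A minimal sketch of inspecting that stream by hand, assuming jq is installed and using an illustrative profile name rather than one from this run:

    # Pretty-print each JSON progress event emitted during a download-only start.
    minikube start -o=json --download-only -p demo --kubernetes-version=v1.20.0 \
      --container-runtime=containerd --driver=docker | jq .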

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
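preload-exists presumably just checks that the tarball fetched by the previous subtest is present in the local cache; a manual spot-check along the same lines, using the cache path logged by the download step later in this report, would be:

    # Verify the v1.20.0/containerd preload tarball landed in the cache
    # (on a default install the cache lives under ~/.minikube instead).
    ls -lh /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4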

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-567694
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-567694: exit status 85 (73.432435ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-567694 | jenkins | v1.33.1 | 28 Aug 24 17:47 UTC |          |
	|         | -p download-only-567694        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 17:47:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 17:47:57.397742  300187 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:47:57.398207  300187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:47:57.398247  300187 out.go:358] Setting ErrFile to fd 2...
	I0828 17:47:57.398270  300187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:47:57.398542  300187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	W0828 17:47:57.398722  300187 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19529-294791/.minikube/config/config.json: open /home/jenkins/minikube-integration/19529-294791/.minikube/config/config.json: no such file or directory
	I0828 17:47:57.399201  300187 out.go:352] Setting JSON to true
	I0828 17:47:57.400104  300187 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5427,"bootTime":1724861851,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0828 17:47:57.400247  300187 start.go:139] virtualization:  
	I0828 17:47:57.403221  300187 out.go:97] [download-only-567694] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0828 17:47:57.403421  300187 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball: no such file or directory
	I0828 17:47:57.403456  300187 notify.go:220] Checking for updates...
	I0828 17:47:57.405506  300187 out.go:169] MINIKUBE_LOCATION=19529
	I0828 17:47:57.407724  300187 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:47:57.409677  300187 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	I0828 17:47:57.411339  300187 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	I0828 17:47:57.413259  300187 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0828 17:47:57.416441  300187 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0828 17:47:57.416735  300187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:47:57.447501  300187 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 17:47:57.447606  300187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 17:47:57.497718  300187 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-28 17:47:57.488480642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 17:47:57.497832  300187 docker.go:307] overlay module found
	I0828 17:47:57.499649  300187 out.go:97] Using the docker driver based on user configuration
	I0828 17:47:57.499676  300187 start.go:297] selected driver: docker
	I0828 17:47:57.499682  300187 start.go:901] validating driver "docker" against <nil>
	I0828 17:47:57.499797  300187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 17:47:57.547580  300187 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-28 17:47:57.538669219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 17:47:57.547747  300187 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 17:47:57.548027  300187 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0828 17:47:57.548168  300187 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 17:47:57.550308  300187 out.go:169] Using Docker driver with root privileges
	I0828 17:47:57.552432  300187 cni.go:84] Creating CNI manager for ""
	I0828 17:47:57.552447  300187 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0828 17:47:57.552459  300187 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0828 17:47:57.552542  300187 start.go:340] cluster config:
	{Name:download-only-567694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-567694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:47:57.554639  300187 out.go:97] Starting "download-only-567694" primary control-plane node in "download-only-567694" cluster
	I0828 17:47:57.554663  300187 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0828 17:47:57.556506  300187 out.go:97] Pulling base image v0.0.44-1724775115-19521 ...
	I0828 17:47:57.556530  300187 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0828 17:47:57.556682  300187 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0828 17:47:57.570750  300187 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 17:47:57.570947  300187 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0828 17:47:57.571060  300187 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 17:47:57.623316  300187 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0828 17:47:57.623345  300187 cache.go:56] Caching tarball of preloaded images
	I0828 17:47:57.623564  300187 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0828 17:47:57.625806  300187 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0828 17:47:57.625856  300187 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0828 17:47:57.826678  300187 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-567694 host does not exist
	  To start a cluster, run: "minikube start -p download-only-567694"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-567694
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (7.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-300361 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-300361 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.425831385s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.43s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-300361
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-300361: exit status 85 (74.715236ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-567694 | jenkins | v1.33.1 | 28 Aug 24 17:47 UTC |                     |
	|         | -p download-only-567694        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC | 28 Aug 24 17:48 UTC |
	| delete  | -p download-only-567694        | download-only-567694 | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC | 28 Aug 24 17:48 UTC |
	| start   | -o=json --download-only        | download-only-300361 | jenkins | v1.33.1 | 28 Aug 24 17:48 UTC |                     |
	|         | -p download-only-300361        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 17:48:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 17:48:06.361970  300391 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:48:06.363147  300391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:48:06.363193  300391 out.go:358] Setting ErrFile to fd 2...
	I0828 17:48:06.363213  300391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:48:06.363558  300391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	I0828 17:48:06.364079  300391 out.go:352] Setting JSON to true
	I0828 17:48:06.365045  300391 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5436,"bootTime":1724861851,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0828 17:48:06.365141  300391 start.go:139] virtualization:  
	I0828 17:48:06.367731  300391 out.go:97] [download-only-300361] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0828 17:48:06.367981  300391 notify.go:220] Checking for updates...
	I0828 17:48:06.369950  300391 out.go:169] MINIKUBE_LOCATION=19529
	I0828 17:48:06.372029  300391 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:48:06.374286  300391 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	I0828 17:48:06.376129  300391 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	I0828 17:48:06.378423  300391 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0828 17:48:06.383050  300391 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0828 17:48:06.383335  300391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:48:06.413463  300391 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 17:48:06.413589  300391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 17:48:06.473605  300391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-28 17:48:06.464120594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 17:48:06.473715  300391 docker.go:307] overlay module found
	I0828 17:48:06.476215  300391 out.go:97] Using the docker driver based on user configuration
	I0828 17:48:06.476243  300391 start.go:297] selected driver: docker
	I0828 17:48:06.476250  300391 start.go:901] validating driver "docker" against <nil>
	I0828 17:48:06.476373  300391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 17:48:06.541252  300391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-28 17:48:06.531362417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 17:48:06.541425  300391 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 17:48:06.541709  300391 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0828 17:48:06.541871  300391 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 17:48:06.544021  300391 out.go:169] Using Docker driver with root privileges
	I0828 17:48:06.547114  300391 cni.go:84] Creating CNI manager for ""
	I0828 17:48:06.547142  300391 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0828 17:48:06.547155  300391 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0828 17:48:06.547246  300391 start.go:340] cluster config:
	{Name:download-only-300361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-300361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:48:06.550200  300391 out.go:97] Starting "download-only-300361" primary control-plane node in "download-only-300361" cluster
	I0828 17:48:06.550236  300391 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0828 17:48:06.552347  300391 out.go:97] Pulling base image v0.0.44-1724775115-19521 ...
	I0828 17:48:06.552396  300391 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0828 17:48:06.552569  300391 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0828 17:48:06.567743  300391 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 17:48:06.567868  300391 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0828 17:48:06.567890  300391 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0828 17:48:06.567897  300391 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0828 17:48:06.567906  300391 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0828 17:48:06.620858  300391 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0828 17:48:06.620888  300391 cache.go:56] Caching tarball of preloaded images
	I0828 17:48:06.621453  300391 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0828 17:48:06.623256  300391 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0828 17:48:06.623284  300391 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0828 17:48:06.718569  300391 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19529-294791/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-300361 host does not exist
	  To start a cluster, run: "minikube start -p download-only-300361"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.2s)
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-300361
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.61s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-394784 --alsologtostderr --binary-mirror http://127.0.0.1:35691 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-394784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-394784
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-606058
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-606058: exit status 85 (64.30256ms)
-- stdout --
	* Profile "addons-606058" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-606058"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-606058
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-606058: exit status 85 (66.625382ms)
-- stdout --
	* Profile "addons-606058" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-606058"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (220.11s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-606058 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-606058 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m40.113523645s)
--- PASS: TestAddons/Setup (220.11s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-606058 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-606058 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/parallel/Registry (16.42s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.957173ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-qgmt4" [50643f06-10a7-469b-a36a-3c6496036a8b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00889705s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mjx8k" [064f11d9-7ab2-407b-9cef-5c27002ca5e1] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004022464s
addons_test.go:342: (dbg) Run:  kubectl --context addons-606058 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-606058 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-606058 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.479280109s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 ip
2024/08/28 17:55:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.42s)
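For reference, the registry addon can be spot-checked by hand with the same probes this test drives; a minimal sketch against this run's profile (curl here is an assumed stand-in for the test's HTTP GET, and port 5000 is the registry port observed above):

    # resolve and probe the in-cluster service from a throwaway busybox pod
    kubectl --context addons-606058 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # then hit the registry through the node IP that minikube reports
    curl -v "http://$(out/minikube-linux-arm64 -p addons-606058 ip):5000"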

                                                
                                    
TestAddons/parallel/Ingress (20.41s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-606058 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-606058 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-606058 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bfd61bc0-f0e7-41fa-b634-5d4b750cf5c2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bfd61bc0-f0e7-41fa-b634-5d4b750cf5c2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003384332s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-606058 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-606058 addons disable ingress-dns --alsologtostderr -v=1: (1.70590738s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-606058 addons disable ingress --alsologtostderr -v=1: (7.858000593s)
--- PASS: TestAddons/parallel/Ingress (20.41s)
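The same ingress checks can be repeated manually; a minimal sketch using this run's profile and the hostnames from the test data (nginx.example.com for the Ingress, hello-john.test for ingress-dns):

    # request the nginx Ingress from inside the node, overriding the Host header
    out/minikube-linux-arm64 -p addons-606058 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # confirm the ingress-dns addon answers for the test hostname at the node IP
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-606058 ip)"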

                                                
                                    
TestAddons/parallel/InspektorGadget (12.06s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-snx4z" [dd2cd5ef-ec5b-4a0b-b3e1-ab7f034d7856] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004088189s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-606058
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-606058: (6.052833202s)
--- PASS: TestAddons/parallel/InspektorGadget (12.06s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.85s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.019063ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-6724d" [dfa449a9-0492-4e3c-8e8c-a7325a127ba0] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004463512s
addons_test.go:417: (dbg) Run:  kubectl --context addons-606058 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)

                                                
                                    
TestAddons/parallel/CSI (54.01s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.841739ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-606058 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-606058 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cedea42e-94c6-422d-86b0-f370a5a5be0c] Pending
helpers_test.go:344: "task-pv-pod" [cedea42e-94c6-422d-86b0-f370a5a5be0c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cedea42e-94c6-422d-86b0-f370a5a5be0c] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003416936s
addons_test.go:590: (dbg) Run:  kubectl --context addons-606058 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-606058 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-606058 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-606058 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-606058 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-606058 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-606058 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d0136da0-cbac-451e-ad8c-81c435fc0e1f] Pending
helpers_test.go:344: "task-pv-pod-restore" [d0136da0-cbac-451e-ad8c-81c435fc0e1f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d0136da0-cbac-451e-ad8c-81c435fc0e1f] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004027097s
addons_test.go:632: (dbg) Run:  kubectl --context addons-606058 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-606058 delete pod task-pv-pod-restore: (1.55060887s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-606058 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-606058 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-606058 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.823559548s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.01s)
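Condensed, the snapshot-and-restore flow exercised above is a sequence of creates against the csi-hostpath-driver testdata referenced in this log (the manifests are assumed to live under testdata/csi-hostpath-driver/ as shown):

    kubectl --context addons-606058 create -f testdata/csi-hostpath-driver/pvc.yaml           # claim served by the csi-hostpath driver
    kubectl --context addons-606058 create -f testdata/csi-hostpath-driver/pv-pod.yaml        # pod that binds the claim
    kubectl --context addons-606058 create -f testdata/csi-hostpath-driver/snapshot.yaml      # VolumeSnapshot of the claim
    kubectl --context addons-606058 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    kubectl --context addons-606058 create -f testdata/csi-hostpath-driver/pvc-restore.yaml   # new claim sourced from the snapshot
    kubectl --context addons-606058 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml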

                                                
                                    
TestAddons/parallel/Headlamp (17.96s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-606058 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-606058 --alsologtostderr -v=1: (1.136887652s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-4lmnb" [e933f70a-cd9c-453b-94da-2ece2098d6f1] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-4lmnb" [e933f70a-cd9c-453b-94da-2ece2098d6f1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-4lmnb" [e933f70a-cd9c-453b-94da-2ece2098d6f1] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00412989s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-606058 addons disable headlamp --alsologtostderr -v=1: (5.818442949s)
--- PASS: TestAddons/parallel/Headlamp (17.96s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.6s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-hxrwq" [24b6fdde-3897-4273-868b-05a1f5713ea7] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003800175s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-606058
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

                                                
                                    
TestAddons/parallel/LocalPath (52.09s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-606058 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-606058 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606058 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [09088bbb-5485-4b37-9894-bc159ef64927] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [09088bbb-5485-4b37-9894-bc159ef64927] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [09088bbb-5485-4b37-9894-bc159ef64927] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003738981s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-606058 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 ssh "cat /opt/local-path-provisioner/pvc-4991930f-cec7-458a-9b95-ebc2ea34657f_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-606058 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-606058 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-606058 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.730140312s)
--- PASS: TestAddons/parallel/LocalPath (52.09s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.76s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gvhjt" [51a2fbcb-34cf-48c0-bcb5-bf6371120839] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.011827381s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-606058
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.76s)

                                                
                                    
TestAddons/parallel/Yakd (11.83s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-lxv5t" [3d343494-85ae-4eb6-a40f-5a3cdb7ff864] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004191773s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-606058 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-606058 addons disable yakd --alsologtostderr -v=1: (5.823281201s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.31s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-606058
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-606058: (12.030893557s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-606058
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-606058
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-606058
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

                                                
                                    
TestCertOptions (32.96s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-173362 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-173362 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (30.304937318s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-173362 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-173362 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-173362 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-173362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-173362
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-173362: (2.001959439s)
--- PASS: TestCertOptions (32.96s)
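To reproduce the certificate check by hand, a minimal sketch with this run's profile; the SANs and API server port are the ones passed on the start line above, and the grep is only an added convenience:

    out/minikube-linux-arm64 start -p cert-options-173362 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=containerd
    # the requested IPs and names should appear in the apiserver certificate's SAN list
    out/minikube-linux-arm64 -p cert-options-173362 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"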

                                                
                                    
TestCertExpiration (227.13s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-985715 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-985715 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.544387346s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-985715 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-985715 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.217717207s)
helpers_test.go:175: Cleaning up "cert-expiration-985715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-985715
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-985715: (2.369391554s)
--- PASS: TestCertExpiration (227.13s)

                                                
                                    
TestForceSystemdFlag (42.5s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-719666 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-719666 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.451999381s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-719666 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-719666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-719666
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-719666: (4.640514908s)
--- PASS: TestForceSystemdFlag (42.50s)
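What the test reads back is containerd's configuration; a minimal sketch of the same check, assuming (not confirmed by this log) that the assertion is on the systemd cgroup setting in the CRI runc options:

    out/minikube-linux-arm64 -p force-systemd-flag-719666 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # expected excerpt when --force-systemd took effect (assumption):
    #   SystemdCgroup = true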

                                                
                                    
TestForceSystemdEnv (43.26s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-393848 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-393848 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.594059354s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-393848 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-393848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-393848
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-393848: (2.263719176s)
--- PASS: TestForceSystemdEnv (43.26s)

                                                
                                    
TestDockerEnvContainerd (49.25s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-681942 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-681942 --driver=docker  --container-runtime=containerd: (33.615337905s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-681942"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-681942": (1.005446187s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KbwMDraFLf07/agent.319441" SSH_AGENT_PID="319442" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KbwMDraFLf07/agent.319441" SSH_AGENT_PID="319442" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KbwMDraFLf07/agent.319441" SSH_AGENT_PID="319442" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.192782115s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KbwMDraFLf07/agent.319441" SSH_AGENT_PID="319442" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-681942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-681942
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-681942: (1.972053642s)
--- PASS: TestDockerEnvContainerd (49.25s)
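The docker-env-over-SSH flow exercised here can be reused interactively; a minimal sketch with this run's profile (the agent socket, PID and port in the log are per-session values printed by docker-env, so eval'ing its output is the usual shortcut):

    out/minikube-linux-arm64 start -p dockerenv-681942 --driver=docker --container-runtime=containerd
    # export SSH_AUTH_SOCK, SSH_AGENT_PID and DOCKER_HOST into the current shell
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-681942)"
    docker version                                                              # now talks to the daemon inside the minikube node
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls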

                                                
                                    
TestErrorSpam/setup (29.91s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-732621 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-732621 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-732621 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-732621 --driver=docker  --container-runtime=containerd: (29.91120423s)
--- PASS: TestErrorSpam/setup (29.91s)

                                                
                                    
TestErrorSpam/start (1.03s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 start --dry-run
--- PASS: TestErrorSpam/start (1.03s)

                                                
                                    
TestErrorSpam/status (1.07s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 status
--- PASS: TestErrorSpam/status (1.07s)

                                                
                                    
TestErrorSpam/pause (1.77s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 pause
--- PASS: TestErrorSpam/pause (1.77s)

                                                
                                    
TestErrorSpam/unpause (2.16s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 unpause
--- PASS: TestErrorSpam/unpause (2.16s)

                                                
                                    
TestErrorSpam/stop (1.46s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 stop: (1.255997554s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-732621 --log_dir /tmp/nospam-732621 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19529-294791/.minikube/files/etc/test/nested/copy/300182/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (51.8s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160288 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-160288 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (51.798251353s)
--- PASS: TestFunctional/serial/StartWithProxy (51.80s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160288 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-160288 --alsologtostderr -v=8: (5.985721546s)
functional_test.go:663: soft start took 5.995123473s for "functional-160288" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.00s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-160288 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-160288 cache add registry.k8s.io/pause:3.1: (1.516242741s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-160288 cache add registry.k8s.io/pause:3.3: (1.522129644s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-160288 cache add registry.k8s.io/pause:latest: (1.221325837s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-160288 /tmp/TestFunctionalserialCacheCmdcacheadd_local873385377/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 cache add minikube-local-cache-test:functional-160288
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 cache delete minikube-local-cache-test:functional-160288
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-160288
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160288 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.964569ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-160288 cache reload: (1.18413042s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
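The round trip above is the standard way to repopulate images that were removed from the node's runtime; a minimal sketch against this run's profile, using only commands that appear in this log:

    out/minikube-linux-arm64 -p functional-160288 cache add registry.k8s.io/pause:latest                # cache the image on the host
    out/minikube-linux-arm64 -p functional-160288 ssh sudo crictl rmi registry.k8s.io/pause:latest      # drop it from the node
    out/minikube-linux-arm64 -p functional-160288 ssh sudo crictl inspecti registry.k8s.io/pause:latest # fails: image is gone
    out/minikube-linux-arm64 -p functional-160288 cache reload                                          # push cached images back in
    out/minikube-linux-arm64 -p functional-160288 ssh sudo crictl inspecti registry.k8s.io/pause:latest # succeeds again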

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 kubectl -- --context functional-160288 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-160288 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.83s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160288 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-160288 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.829896005s)
functional_test.go:761: restart took 43.830009638s for "functional-160288" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.83s)
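Each --extra-config entry is a component.flag=value pair handed to the matching Kubernetes component on start; the run above passes NamespaceAutoProvision to the apiserver's admission plugins. A minimal sketch of the general shape (the kubelet entry is illustrative only, not part of this run):

    out/minikube-linux-arm64 start -p functional-160288 --wait=all --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --extra-config=kubelet.max-pods=110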

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-160288 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.67s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-160288 logs: (1.673592578s)
--- PASS: TestFunctional/serial/LogsCmd (1.67s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.7s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 logs --file /tmp/TestFunctionalserialLogsFileCmd1021257217/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-160288 logs --file /tmp/TestFunctionalserialLogsFileCmd1021257217/001/logs.txt: (1.697274891s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.70s)
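Both log commands can be reproduced by hand; a minimal sketch (the output path here is illustrative):
minikube -p functional-160288 logs                        # print cluster logs to stdout
minikube -p functional-160288 logs --file /tmp/logs.txt   # write them to a file instead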

                                                
                                    
TestFunctional/serial/InvalidService (4.75s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-160288 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-160288
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-160288: exit status 115 (666.173535ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31269 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-160288 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.75s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160288 config get cpus: exit status 14 (102.707507ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160288 config get cpus: exit status 14 (91.481323ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
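The sequence above checks that "config get" on an unset key fails with exit code 14; a sketch of the same round trip:
minikube -p functional-160288 config unset cpus   # make sure the key is absent
minikube -p functional-160288 config get cpus     # exits 14: "specified key could not be found in config"
minikube -p functional-160288 config set cpus 2
minikube -p functional-160288 config get cpus     # prints 2
minikube -p functional-160288 config unset cpus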

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-160288 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-160288 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 336844: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.92s)

                                                
                                    
TestFunctional/parallel/DryRun (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-160288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (186.656226ms)

                                                
                                                
-- stdout --
	* [functional-160288] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 18:01:50.775303  336343 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:01:50.775475  336343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:01:50.775485  336343 out.go:358] Setting ErrFile to fd 2...
	I0828 18:01:50.775490  336343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:01:50.775738  336343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	I0828 18:01:50.776098  336343 out.go:352] Setting JSON to false
	I0828 18:01:50.777044  336343 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6260,"bootTime":1724861851,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0828 18:01:50.777121  336343 start.go:139] virtualization:  
	I0828 18:01:50.780094  336343 out.go:177] * [functional-160288] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0828 18:01:50.783527  336343 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:01:50.783595  336343 notify.go:220] Checking for updates...
	I0828 18:01:50.787965  336343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:01:50.790395  336343 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	I0828 18:01:50.792565  336343 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	I0828 18:01:50.795117  336343 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0828 18:01:50.798842  336343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:01:50.807742  336343 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0828 18:01:50.808525  336343 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:01:50.835065  336343 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 18:01:50.835203  336343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 18:01:50.900542  336343 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-28 18:01:50.88950336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 18:01:50.900683  336343 docker.go:307] overlay module found
	I0828 18:01:50.902704  336343 out.go:177] * Using the docker driver based on existing profile
	I0828 18:01:50.904675  336343 start.go:297] selected driver: docker
	I0828 18:01:50.904699  336343 start.go:901] validating driver "docker" against &{Name:functional-160288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-160288 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:01:50.904838  336343 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:01:50.907138  336343 out.go:201] 
	W0828 18:01:50.908649  336343 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0828 18:01:50.910612  336343 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160288 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.49s)
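--dry-run validates flags and driver selection without touching the existing cluster, which is why the undersized memory request fails fast. A sketch of both invocations, assuming minikube on PATH:
minikube start -p functional-160288 --dry-run --driver=docker --container-runtime=containerd
minikube start -p functional-160288 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
# the second exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY (below the 1800MB usable minimum)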

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-160288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-160288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (268.527288ms)

                                                
                                                
-- stdout --
	* [functional-160288] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 18:01:51.292272  336466 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:01:51.292409  336466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:01:51.292415  336466 out.go:358] Setting ErrFile to fd 2...
	I0828 18:01:51.292427  336466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:01:51.292889  336466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	I0828 18:01:51.293377  336466 out.go:352] Setting JSON to false
	I0828 18:01:51.294553  336466 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6261,"bootTime":1724861851,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0828 18:01:51.294642  336466 start.go:139] virtualization:  
	I0828 18:01:51.297154  336466 out.go:177] * [functional-160288] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0828 18:01:51.298598  336466 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:01:51.298714  336466 notify.go:220] Checking for updates...
	I0828 18:01:51.303438  336466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:01:51.305963  336466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	I0828 18:01:51.308331  336466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	I0828 18:01:51.310044  336466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0828 18:01:51.313251  336466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:01:51.315654  336466 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0828 18:01:51.316256  336466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:01:51.366482  336466 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 18:01:51.366584  336466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 18:01:51.467957  336466 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-28 18:01:51.457796765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 18:01:51.468079  336466 docker.go:307] overlay module found
	I0828 18:01:51.470956  336466 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0828 18:01:51.473201  336466 start.go:297] selected driver: docker
	I0828 18:01:51.473252  336466 start.go:901] validating driver "docker" against &{Name:functional-160288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-160288 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:01:51.473401  336466 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:01:51.476264  336466 out.go:201] 
	W0828 18:01:51.478354  336466 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0828 18:01:51.480865  336466 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
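The three status invocations cover the default table, a Go-template format string, and JSON output; a sketch:
minikube -p functional-160288 status
minikube -p functional-160288 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
minikube -p functional-160288 status -o json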

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (6.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-160288 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-160288 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-ksg5v" [486a8cd3-6a3e-4c41-bfb2-69353345c2cb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-ksg5v" [486a8cd3-6a3e-4c41-bfb2-69353345c2cb] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.004366585s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30874
functional_test.go:1675: http://192.168.49.2:30874: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-ksg5v

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30874
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.73s)
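End to end, the test creates a deployment, exposes it as a NodePort service, and fetches the URL minikube reports; a sketch of the same flow, using the image and names from the log above:
kubectl --context functional-160288 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-160288 expose deployment hello-node-connect --type=NodePort --port=8080
minikube -p functional-160288 service hello-node-connect --url   # e.g. http://192.168.49.2:30874
curl "$(minikube -p functional-160288 service hello-node-connect --url)"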

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6da580b1-e5b4-41d8-a57c-fc1a17d4be40] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003216937s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-160288 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-160288 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-160288 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-160288 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4b23aa11-51bf-46a8-ae26-7e0e68404e31] Pending
helpers_test.go:344: "sp-pod" [4b23aa11-51bf-46a8-ae26-7e0e68404e31] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4b23aa11-51bf-46a8-ae26-7e0e68404e31] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004305105s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-160288 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-160288 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-160288 delete -f testdata/storage-provisioner/pod.yaml: (1.748911282s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-160288 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b1fd617b-690f-4a72-8a7b-13cb8098c330] Pending
helpers_test.go:344: "sp-pod" [b1fd617b-690f-4a72-8a7b-13cb8098c330] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004275424s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-160288 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.83s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.51s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh -n functional-160288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 cp functional-160288:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2380725480/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh -n functional-160288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh -n functional-160288 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.97s)
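minikube cp copies in either direction, with the node side written as <node>:<path>; a sketch of the three copies exercised above:
minikube -p functional-160288 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
minikube -p functional-160288 cp functional-160288:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host
minikube -p functional-160288 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt          # missing parent dirs are created on the node
minikube -p functional-160288 ssh -n functional-160288 "sudo cat /home/docker/cp-test.txt"     # verify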

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/300182/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "sudo cat /etc/test/nested/copy/300182/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/300182.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "sudo cat /etc/ssl/certs/300182.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/300182.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "sudo cat /usr/share/ca-certificates/300182.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3001822.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "sudo cat /etc/ssl/certs/3001822.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3001822.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "sudo cat /usr/share/ca-certificates/3001822.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-160288 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
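The label check walks the first node's label map with a Go template; a sketch of that and of a simpler equivalent:
kubectl --context functional-160288 get nodes -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
kubectl --context functional-160288 get nodes --show-labels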

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160288 ssh "sudo systemctl is-active docker": exit status 1 (336.38827ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160288 ssh "sudo systemctl is-active crio": exit status 1 (316.695532ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
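With containerd selected as the runtime, the docker and crio units are expected to be inactive, so "systemctl is-active" exits non-zero and the ssh wrapper surfaces exit status 1. A sketch, assuming the unit names match the kicbase image:
minikube -p functional-160288 ssh "sudo systemctl is-active containerd"   # active
minikube -p functional-160288 ssh "sudo systemctl is-active docker"       # inactive
minikube -p functional-160288 ssh "sudo systemctl is-active crio"         # inactive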

                                                
                                    
TestFunctional/parallel/License (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

                                                
                                    
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-160288 version -o=json --components: (1.331775405s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-160288 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-160288
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-160288
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-160288 image ls --format short --alsologtostderr:
I0828 18:01:53.674739  336959 out.go:345] Setting OutFile to fd 1 ...
I0828 18:01:53.674881  336959 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 18:01:53.674889  336959 out.go:358] Setting ErrFile to fd 2...
I0828 18:01:53.674894  336959 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 18:01:53.675182  336959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
I0828 18:01:53.675983  336959 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0828 18:01:53.676127  336959 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0828 18:01:53.676635  336959 cli_runner.go:164] Run: docker container inspect functional-160288 --format={{.State.Status}}
I0828 18:01:53.695025  336959 ssh_runner.go:195] Run: systemctl --version
I0828 18:01:53.695097  336959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-160288
I0828 18:01:53.713780  336959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/functional-160288/id_rsa Username:docker}
I0828 18:01:53.815935  336959 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
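The same image listing is available in several formats, which the following subtests exercise one by one; a sketch:
minikube -p functional-160288 image ls --format short   # one reference per line, as shown above
minikube -p functional-160288 image ls --format table
minikube -p functional-160288 image ls --format json
minikube -p functional-160288 image ls --format yaml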

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-160288 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | alpine             | sha256:70594c | 19.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-160288  | sha256:881d69 | 991B   |
| docker.io/library/nginx                     | latest             | sha256:a9dfdb | 67.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-160288  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| localhost/my-image                          | functional-160288  | sha256:5db10f | 831kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-160288 image ls --format table --alsologtostderr:
I0828 18:01:58.507967  337405 out.go:345] Setting OutFile to fd 1 ...
I0828 18:01:58.508113  337405 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 18:01:58.508118  337405 out.go:358] Setting ErrFile to fd 2...
I0828 18:01:58.508123  337405 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 18:01:58.508366  337405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
I0828 18:01:58.509037  337405 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0828 18:01:58.509169  337405 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0828 18:01:58.509760  337405 cli_runner.go:164] Run: docker container inspect functional-160288 --format={{.State.Status}}
I0828 18:01:58.533144  337405 ssh_runner.go:195] Run: systemctl --version
I0828 18:01:58.533234  337405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-160288
I0828 18:01:58.560964  337405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/functional-160288/id_rsa Username:docker}
I0828 18:01:58.653241  337405 ssh_runner.go:195] Run: sudo crictl images --output json
2024/08/28 18:01:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image ls --format json --alsologtostderr
E0828 18:01:58.328061  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-160288 image ls --format json --alsologtostderr:
[{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-160288"],"size":"2173567"},{"id"
:"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3
c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:5db10f9700c39cf0e701e0a0f4a1a52d9bcef4cd0eaf68c764fe90deac6c3bbd","repoDigests":[],"repoTags":["localhost/my-image:functional-160288"],"size":"830618"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e
473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add"],"repoTags":["docker.io/library/nginx:latest"],"size":"67690150"},{"id":"sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19627164"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256
:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:881d697162a992f815059b728ed79c398eb3611158a9da6e879b13fecd16e3f2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-160288"],"size":"991"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-160288 image ls --format json --alsologtostderr:
I0828 18:01:58.244624  337369 out.go:345] Setting OutFile to fd 1 ...
I0828 18:01:58.244795  337369 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 18:01:58.244824  337369 out.go:358] Setting ErrFile to fd 2...
I0828 18:01:58.244843  337369 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 18:01:58.245087  337369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
I0828 18:01:58.245736  337369 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0828 18:01:58.245900  337369 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0828 18:01:58.246438  337369 cli_runner.go:164] Run: docker container inspect functional-160288 --format={{.State.Status}}
I0828 18:01:58.270045  337369 ssh_runner.go:195] Run: systemctl --version
I0828 18:01:58.270097  337369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-160288
I0828 18:01:58.296433  337369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/functional-160288/id_rsa Username:docker}
I0828 18:01:58.396340  337369 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
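Note: the JSON listing exercised above can be reproduced by hand against the same profile; per the log, the harness ultimately runs crictl inside the node, so the second command below (a sketch using minikube's ssh passthrough, not something the test itself invokes in this form) should show equivalent data. The profile name is simply the one from this run.

  out/minikube-linux-arm64 -p functional-160288 image ls --format json --alsologtostderr
  # roughly equivalent in-node listing, as seen in the harness log above:
  out/minikube-linux-arm64 -p functional-160288 ssh -- sudo crictl images --output json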

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-160288 image ls --format yaml --alsologtostderr:
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:881d697162a992f815059b728ed79c398eb3611158a9da6e879b13fecd16e3f2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-160288
size: "991"
- id: sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
repoTags:
- docker.io/library/nginx:latest
size: "67690150"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-160288
size: "2173567"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "19627164"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-160288 image ls --format yaml --alsologtostderr:
I0828 18:01:53.949524  337052 out.go:345] Setting OutFile to fd 1 ...
I0828 18:01:53.949721  337052 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 18:01:53.949733  337052 out.go:358] Setting ErrFile to fd 2...
I0828 18:01:53.949738  337052 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 18:01:53.950002  337052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
I0828 18:01:53.950640  337052 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0828 18:01:53.950805  337052 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0828 18:01:53.951355  337052 cli_runner.go:164] Run: docker container inspect functional-160288 --format={{.State.Status}}
I0828 18:01:53.977158  337052 ssh_runner.go:195] Run: systemctl --version
I0828 18:01:53.977209  337052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-160288
I0828 18:01:53.996151  337052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/functional-160288/id_rsa Username:docker}
I0828 18:01:54.096265  337052 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160288 ssh pgrep buildkitd: exit status 1 (336.656854ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image build -t localhost/my-image:functional-160288 testdata/build --alsologtostderr
E0828 18:01:55.757936  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:01:55.765064  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:01:55.777067  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:01:55.798810  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:01:55.840358  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:01:55.922132  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:01:56.083707  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:01:56.405205  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:01:57.046815  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-160288 image build -t localhost/my-image:functional-160288 testdata/build --alsologtostderr: (3.404331622s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-160288 image build -t localhost/my-image:functional-160288 testdata/build --alsologtostderr:
I0828 18:01:54.572111  337141 out.go:345] Setting OutFile to fd 1 ...
I0828 18:01:54.572564  337141 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 18:01:54.572577  337141 out.go:358] Setting ErrFile to fd 2...
I0828 18:01:54.572584  337141 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 18:01:54.572837  337141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
I0828 18:01:54.573501  337141 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0828 18:01:54.574184  337141 config.go:182] Loaded profile config "functional-160288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0828 18:01:54.574819  337141 cli_runner.go:164] Run: docker container inspect functional-160288 --format={{.State.Status}}
I0828 18:01:54.595885  337141 ssh_runner.go:195] Run: systemctl --version
I0828 18:01:54.595943  337141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-160288
I0828 18:01:54.623648  337141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/functional-160288/id_rsa Username:docker}
I0828 18:01:54.726401  337141 build_images.go:161] Building image from path: /tmp/build.1958143842.tar
I0828 18:01:54.726482  337141 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0828 18:01:54.737189  337141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1958143842.tar
I0828 18:01:54.743135  337141 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1958143842.tar: stat -c "%s %y" /var/lib/minikube/build/build.1958143842.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1958143842.tar': No such file or directory
I0828 18:01:54.743167  337141 ssh_runner.go:362] scp /tmp/build.1958143842.tar --> /var/lib/minikube/build/build.1958143842.tar (3072 bytes)
I0828 18:01:54.778237  337141 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1958143842
I0828 18:01:54.787993  337141 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1958143842 -xf /var/lib/minikube/build/build.1958143842.tar
I0828 18:01:54.801356  337141 containerd.go:394] Building image: /var/lib/minikube/build/build.1958143842
I0828 18:01:54.801460  337141 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1958143842 --local dockerfile=/var/lib/minikube/build/build.1958143842 --output type=image,name=localhost/my-image:functional-160288
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:1a9b7771f2e8c1d8ead3b90f39a09395dc4120029d0a6f73875bc7a326461631
#8 exporting manifest sha256:1a9b7771f2e8c1d8ead3b90f39a09395dc4120029d0a6f73875bc7a326461631 0.0s done
#8 exporting config sha256:5db10f9700c39cf0e701e0a0f4a1a52d9bcef4cd0eaf68c764fe90deac6c3bbd 0.0s done
#8 naming to localhost/my-image:functional-160288 done
#8 DONE 0.2s
I0828 18:01:57.860225  337141 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1958143842 --local dockerfile=/var/lib/minikube/build/build.1958143842 --output type=image,name=localhost/my-image:functional-160288: (3.05873211s)
I0828 18:01:57.860315  337141 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1958143842
I0828 18:01:57.871897  337141 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1958143842.tar
I0828 18:01:57.888158  337141 build_images.go:217] Built localhost/my-image:functional-160288 from /tmp/build.1958143842.tar
I0828 18:01:57.888235  337141 build_images.go:133] succeeded building to: functional-160288
I0828 18:01:57.888257  337141 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.04s)
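Note: a minimal sketch of the build path exercised above, using only commands visible in this log. The CLI tars the build context, copies it to /var/lib/minikube/build inside the node, and drives buildctl with the dockerfile.v0 frontend; the temporary build directory name below is the one from this run.

  out/minikube-linux-arm64 -p functional-160288 image build -t localhost/my-image:functional-160288 testdata/build --alsologtostderr
  # which, per the log above, executes roughly the following inside the node:
  sudo buildctl build --frontend dockerfile.v0 \
    --local context=/var/lib/minikube/build/build.1958143842 \
    --local dockerfile=/var/lib/minikube/build/build.1958143842 \
    --output type=image,name=localhost/my-image:functional-160288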

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-160288
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image load --daemon kicbase/echo-server:functional-160288 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-160288 image load --daemon kicbase/echo-server:functional-160288 --alsologtostderr: (1.177458276s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image load --daemon kicbase/echo-server:functional-160288 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-160288 image load --daemon kicbase/echo-server:functional-160288 --alsologtostderr: (1.172359156s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-160288 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-160288 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-fvxmn" [51f9ba5c-8653-4d1d-824f-e350892f4de4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-fvxmn" [51f9ba5c-8653-4d1d-824f-e350892f4de4] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004056127s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.27s)
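Note: the deployment/service pair used by the ServiceCmd subtests can be recreated manually with the same commands the test runs; the final URL lookup mirrors what the later ServiceCmd/URL check does.

  kubectl --context functional-160288 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-160288 expose deployment hello-node --type=NodePort --port=8080
  out/minikube-linux-arm64 -p functional-160288 service hello-node --url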

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-160288
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image load --daemon kicbase/echo-server:functional-160288 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-160288 image load --daemon kicbase/echo-server:functional-160288 --alsologtostderr: (1.173878213s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image save kicbase/echo-server:functional-160288 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image rm kicbase/echo-server:functional-160288 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-160288
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 image save --daemon kicbase/echo-server:functional-160288 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-160288
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
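Note: taken together, the image subtests above form a save/load round trip; a minimal sketch using the same tag and tarball path as this run.

  out/minikube-linux-arm64 -p functional-160288 image load --daemon kicbase/echo-server:functional-160288 --alsologtostderr
  out/minikube-linux-arm64 -p functional-160288 image save kicbase/echo-server:functional-160288 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
  out/minikube-linux-arm64 -p functional-160288 image rm kicbase/echo-server:functional-160288 --alsologtostderr
  out/minikube-linux-arm64 -p functional-160288 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
  out/minikube-linux-arm64 -p functional-160288 image save --daemon kicbase/echo-server:functional-160288 --alsologtostderr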

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-160288 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-160288 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-160288 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-160288 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 333097: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-160288 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-160288 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fcbb1934-f8ba-42c8-a34d-8f88ce0edfda] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fcbb1934-f8ba-42c8-a34d-8f88ce0edfda] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003956238s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 service list -o json
functional_test.go:1494: Took "328.604373ms" to run "out/minikube-linux-arm64 -p functional-160288 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32358
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32358
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-160288 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.241.239 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
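Note: a minimal sketch of the tunnel check above. The tunnel and jsonpath commands are the ones from this log; curl is an assumption here, since the test performs the HTTP GET in-process.

  # terminal 1: keep the tunnel running
  out/minikube-linux-arm64 -p functional-160288 tunnel --alsologtostderr
  # terminal 2: resolve the LoadBalancer IP assigned by the tunnel and hit it
  curl "http://$(kubectl --context functional-160288 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/"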

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-160288 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "325.739023ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "59.29042ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "321.62917ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "56.23656ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
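Note: the profile listings above are plain CLI calls; --light skips validating each cluster's status, which is consistent with the ~56ms versus ~322ms timings recorded above.

  out/minikube-linux-arm64 profile list
  out/minikube-linux-arm64 profile list -o json --light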

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-160288 /tmp/TestFunctionalparallelMountCmdany-port3785601461/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724868098850401903" to /tmp/TestFunctionalparallelMountCmdany-port3785601461/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724868098850401903" to /tmp/TestFunctionalparallelMountCmdany-port3785601461/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724868098850401903" to /tmp/TestFunctionalparallelMountCmdany-port3785601461/001/test-1724868098850401903
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.480687ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 28 18:01 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 28 18:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 28 18:01 test-1724868098850401903
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh cat /mount-9p/test-1724868098850401903
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-160288 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8ead44e3-9554-47ae-aff3-f84dd0c1034c] Pending
helpers_test.go:344: "busybox-mount" [8ead44e3-9554-47ae-aff3-f84dd0c1034c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8ead44e3-9554-47ae-aff3-f84dd0c1034c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8ead44e3-9554-47ae-aff3-f84dd0c1034c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004830821s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-160288 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160288 /tmp/TestFunctionalparallelMountCmdany-port3785601461/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.10s)
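Note: a minimal sketch of the 9p mount flow exercised above; the host directory is whatever you want to expose (/tmp/demo below is just an example, not the temp dir used by this run).

  out/minikube-linux-arm64 mount -p functional-160288 /tmp/demo:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-160288 ssh -- ls -la /mount-9p
  # clean up the mount and the background mount process
  out/minikube-linux-arm64 -p functional-160288 ssh "sudo umount -f /mount-9p"
  kill %1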

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-160288 /tmp/TestFunctionalparallelMountCmdspecific-port4013969349/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (331.042502ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160288 /tmp/TestFunctionalparallelMountCmdspecific-port4013969349/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160288 ssh "sudo umount -f /mount-9p": exit status 1 (269.137181ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-160288 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160288 /tmp/TestFunctionalparallelMountCmdspecific-port4013969349/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-160288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3618539088/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-160288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3618539088/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-160288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3618539088/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T" /mount1: exit status 1 (537.397609ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-160288 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-160288 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3618539088/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3618539088/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-160288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3618539088/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-160288
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-160288
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-160288
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (112.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-932184 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0828 18:02:06.012261  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:02:16.253563  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:02:36.735540  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:03:17.697752  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-932184 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m51.523543969s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (112.38s)
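Note: the HA topology used by the remaining TestMultiControlPlane subtests comes from a single start invocation; the follow-up status call lists every control-plane and worker node in the profile.

  out/minikube-linux-arm64 start -p ha-932184 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr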

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (29.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-932184 -- rollout status deployment/busybox: (26.976510373s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-9p29s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-dhxhp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-fmkpg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-9p29s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-dhxhp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-fmkpg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-9p29s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-dhxhp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-fmkpg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (29.92s)
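Note: the DNS checks above can be replayed against any of the busybox replicas; the pod name below is one from this run and will differ on a fresh deployment.

  out/minikube-linux-arm64 kubectl -p ha-932184 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-arm64 kubectl -p ha-932184 -- rollout status deployment/busybox
  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-9p29s -- nslookup kubernetes.default.svc.cluster.local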

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-9p29s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-9p29s -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-dhxhp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-dhxhp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-fmkpg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-932184 -- exec busybox-7dff88458-fmkpg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (23.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-932184 -v=7 --alsologtostderr
E0828 18:04:39.619511  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-932184 -v=7 --alsologtostderr: (22.782133633s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.74s)
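Note: the worker node is added to the running HA profile with node add, and the follow-up status call is how the test confirms the new node registered.

  out/minikube-linux-arm64 node add -p ha-932184 -v=7 --alsologtostderr
  out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr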

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-932184 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp testdata/cp-test.txt ha-932184:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2725359821/001/cp-test_ha-932184.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184:/home/docker/cp-test.txt ha-932184-m02:/home/docker/cp-test_ha-932184_ha-932184-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m02 "sudo cat /home/docker/cp-test_ha-932184_ha-932184-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184:/home/docker/cp-test.txt ha-932184-m03:/home/docker/cp-test_ha-932184_ha-932184-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m03 "sudo cat /home/docker/cp-test_ha-932184_ha-932184-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184:/home/docker/cp-test.txt ha-932184-m04:/home/docker/cp-test_ha-932184_ha-932184-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m04 "sudo cat /home/docker/cp-test_ha-932184_ha-932184-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp testdata/cp-test.txt ha-932184-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2725359821/001/cp-test_ha-932184-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m02:/home/docker/cp-test.txt ha-932184:/home/docker/cp-test_ha-932184-m02_ha-932184.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184 "sudo cat /home/docker/cp-test_ha-932184-m02_ha-932184.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m02:/home/docker/cp-test.txt ha-932184-m03:/home/docker/cp-test_ha-932184-m02_ha-932184-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m03 "sudo cat /home/docker/cp-test_ha-932184-m02_ha-932184-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m02:/home/docker/cp-test.txt ha-932184-m04:/home/docker/cp-test_ha-932184-m02_ha-932184-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m04 "sudo cat /home/docker/cp-test_ha-932184-m02_ha-932184-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp testdata/cp-test.txt ha-932184-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2725359821/001/cp-test_ha-932184-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m03:/home/docker/cp-test.txt ha-932184:/home/docker/cp-test_ha-932184-m03_ha-932184.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184 "sudo cat /home/docker/cp-test_ha-932184-m03_ha-932184.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m03:/home/docker/cp-test.txt ha-932184-m02:/home/docker/cp-test_ha-932184-m03_ha-932184-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m02 "sudo cat /home/docker/cp-test_ha-932184-m03_ha-932184-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m03:/home/docker/cp-test.txt ha-932184-m04:/home/docker/cp-test_ha-932184-m03_ha-932184-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m04 "sudo cat /home/docker/cp-test_ha-932184-m03_ha-932184-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp testdata/cp-test.txt ha-932184-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2725359821/001/cp-test_ha-932184-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m04:/home/docker/cp-test.txt ha-932184:/home/docker/cp-test_ha-932184-m04_ha-932184.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184 "sudo cat /home/docker/cp-test_ha-932184-m04_ha-932184.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m04:/home/docker/cp-test.txt ha-932184-m02:/home/docker/cp-test_ha-932184-m04_ha-932184-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m02 "sudo cat /home/docker/cp-test_ha-932184-m04_ha-932184-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 cp ha-932184-m04:/home/docker/cp-test.txt ha-932184-m03:/home/docker/cp-test_ha-932184-m04_ha-932184-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 ssh -n ha-932184-m03 "sudo cat /home/docker/cp-test_ha-932184-m04_ha-932184-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.08s)

TestMultiControlPlane/serial/StopSecondaryNode (12.85s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-932184 node stop m02 -v=7 --alsologtostderr: (12.105695529s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr: exit status 7 (743.908348ms)
-- stdout --
	ha-932184
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-932184-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-932184-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-932184-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0828 18:05:21.766325  353624 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:05:21.766454  353624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:05:21.766465  353624 out.go:358] Setting ErrFile to fd 2...
	I0828 18:05:21.766470  353624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:05:21.766841  353624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	I0828 18:05:21.767061  353624 out.go:352] Setting JSON to false
	I0828 18:05:21.767117  353624 mustload.go:65] Loading cluster: ha-932184
	I0828 18:05:21.767198  353624 notify.go:220] Checking for updates...
	I0828 18:05:21.767698  353624 config.go:182] Loaded profile config "ha-932184": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0828 18:05:21.767720  353624 status.go:255] checking status of ha-932184 ...
	I0828 18:05:21.768228  353624 cli_runner.go:164] Run: docker container inspect ha-932184 --format={{.State.Status}}
	I0828 18:05:21.787309  353624 status.go:330] ha-932184 host status = "Running" (err=<nil>)
	I0828 18:05:21.787330  353624 host.go:66] Checking if "ha-932184" exists ...
	I0828 18:05:21.787797  353624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-932184
	I0828 18:05:21.821283  353624 host.go:66] Checking if "ha-932184" exists ...
	I0828 18:05:21.821599  353624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 18:05:21.821645  353624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-932184
	I0828 18:05:21.838524  353624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/ha-932184/id_rsa Username:docker}
	I0828 18:05:21.932905  353624 ssh_runner.go:195] Run: systemctl --version
	I0828 18:05:21.937437  353624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:05:21.948867  353624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 18:05:22.032020  353624 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-28 18:05:22.020829213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 18:05:22.032667  353624 kubeconfig.go:125] found "ha-932184" server: "https://192.168.49.254:8443"
	I0828 18:05:22.032702  353624 api_server.go:166] Checking apiserver status ...
	I0828 18:05:22.032755  353624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:05:22.045671  353624 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1520/cgroup
	I0828 18:05:22.060968  353624 api_server.go:182] apiserver freezer: "9:freezer:/docker/c401dcae1fa192b5a5a23910889880987bd21e33a9445072c5a4b80c258ebd95/kubepods/burstable/pod1c3e05c7c41720fcae304be73ccf2c95/17116830aee21f3d151c9186a1921fea6aa93b874babbcdbc01ca1dd4e7ba232"
	I0828 18:05:22.061043  353624 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c401dcae1fa192b5a5a23910889880987bd21e33a9445072c5a4b80c258ebd95/kubepods/burstable/pod1c3e05c7c41720fcae304be73ccf2c95/17116830aee21f3d151c9186a1921fea6aa93b874babbcdbc01ca1dd4e7ba232/freezer.state
	I0828 18:05:22.071445  353624 api_server.go:204] freezer state: "THAWED"
	I0828 18:05:22.071490  353624 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0828 18:05:22.079597  353624 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0828 18:05:22.079627  353624 status.go:422] ha-932184 apiserver status = Running (err=<nil>)
	I0828 18:05:22.079640  353624 status.go:257] ha-932184 status: &{Name:ha-932184 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 18:05:22.079666  353624 status.go:255] checking status of ha-932184-m02 ...
	I0828 18:05:22.079996  353624 cli_runner.go:164] Run: docker container inspect ha-932184-m02 --format={{.State.Status}}
	I0828 18:05:22.102692  353624 status.go:330] ha-932184-m02 host status = "Stopped" (err=<nil>)
	I0828 18:05:22.102718  353624 status.go:343] host is not running, skipping remaining checks
	I0828 18:05:22.102725  353624 status.go:257] ha-932184-m02 status: &{Name:ha-932184-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 18:05:22.102745  353624 status.go:255] checking status of ha-932184-m03 ...
	I0828 18:05:22.103059  353624 cli_runner.go:164] Run: docker container inspect ha-932184-m03 --format={{.State.Status}}
	I0828 18:05:22.121218  353624 status.go:330] ha-932184-m03 host status = "Running" (err=<nil>)
	I0828 18:05:22.121244  353624 host.go:66] Checking if "ha-932184-m03" exists ...
	I0828 18:05:22.121555  353624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-932184-m03
	I0828 18:05:22.139562  353624 host.go:66] Checking if "ha-932184-m03" exists ...
	I0828 18:05:22.139933  353624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 18:05:22.139983  353624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-932184-m03
	I0828 18:05:22.156391  353624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/ha-932184-m03/id_rsa Username:docker}
	I0828 18:05:22.248693  353624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:05:22.261208  353624 kubeconfig.go:125] found "ha-932184" server: "https://192.168.49.254:8443"
	I0828 18:05:22.261245  353624 api_server.go:166] Checking apiserver status ...
	I0828 18:05:22.261296  353624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:05:22.272445  353624 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1341/cgroup
	I0828 18:05:22.281549  353624 api_server.go:182] apiserver freezer: "9:freezer:/docker/37baca14f7e92c1e7073e05a5819307b0373f67fee1435cbbba14c65856b8364/kubepods/burstable/pod7b6f1e068b1b0fb4dcb2eafcb59c4400/b77d1a856c7226bc56ff795735091d705d982d2cfcd7caf0b3f256426f2aed9f"
	I0828 18:05:22.281633  353624 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/37baca14f7e92c1e7073e05a5819307b0373f67fee1435cbbba14c65856b8364/kubepods/burstable/pod7b6f1e068b1b0fb4dcb2eafcb59c4400/b77d1a856c7226bc56ff795735091d705d982d2cfcd7caf0b3f256426f2aed9f/freezer.state
	I0828 18:05:22.291057  353624 api_server.go:204] freezer state: "THAWED"
	I0828 18:05:22.291086  353624 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0828 18:05:22.298765  353624 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0828 18:05:22.298794  353624 status.go:422] ha-932184-m03 apiserver status = Running (err=<nil>)
	I0828 18:05:22.298803  353624 status.go:257] ha-932184-m03 status: &{Name:ha-932184-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 18:05:22.298835  353624 status.go:255] checking status of ha-932184-m04 ...
	I0828 18:05:22.299146  353624 cli_runner.go:164] Run: docker container inspect ha-932184-m04 --format={{.State.Status}}
	I0828 18:05:22.315449  353624 status.go:330] ha-932184-m04 host status = "Running" (err=<nil>)
	I0828 18:05:22.315477  353624 host.go:66] Checking if "ha-932184-m04" exists ...
	I0828 18:05:22.315787  353624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-932184-m04
	I0828 18:05:22.335578  353624 host.go:66] Checking if "ha-932184-m04" exists ...
	I0828 18:05:22.335923  353624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 18:05:22.335975  353624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-932184-m04
	I0828 18:05:22.356205  353624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/ha-932184-m04/id_rsa Username:docker}
	I0828 18:05:22.448939  353624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:05:22.460510  353624 status.go:257] ha-932184-m04 status: &{Name:ha-932184-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.85s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.54s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-932184 node start m02 -v=7 --alsologtostderr: (17.445828318s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.54s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.78s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.78s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (141.06s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-932184 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-932184 -v=7 --alsologtostderr
E0828 18:06:15.735007  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:15.741461  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:15.752933  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:15.774588  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:15.816157  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:15.897708  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:16.059490  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:16.381311  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:17.023424  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:18.305283  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-932184 -v=7 --alsologtostderr: (37.218687881s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-932184 --wait=true -v=7 --alsologtostderr
E0828 18:06:20.867300  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:25.989225  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:36.231480  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:55.757189  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:56.713515  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:07:23.461463  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:07:37.675621  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-932184 --wait=true -v=7 --alsologtostderr: (1m43.658712025s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-932184
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (141.06s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.57s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-932184 node delete m03 -v=7 --alsologtostderr: (9.633708399s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.57s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

TestMultiControlPlane/serial/StopCluster (36.09s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-932184 stop -v=7 --alsologtostderr: (35.973231471s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr: exit status 7 (115.999011ms)
-- stdout --
	ha-932184
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-932184-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-932184-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0828 18:08:50.496351  367903 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:08:50.496482  367903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:08:50.496518  367903 out.go:358] Setting ErrFile to fd 2...
	I0828 18:08:50.496532  367903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:08:50.496761  367903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	I0828 18:08:50.496951  367903 out.go:352] Setting JSON to false
	I0828 18:08:50.496994  367903 mustload.go:65] Loading cluster: ha-932184
	I0828 18:08:50.497076  367903 notify.go:220] Checking for updates...
	I0828 18:08:50.497417  367903 config.go:182] Loaded profile config "ha-932184": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0828 18:08:50.497428  367903 status.go:255] checking status of ha-932184 ...
	I0828 18:08:50.497935  367903 cli_runner.go:164] Run: docker container inspect ha-932184 --format={{.State.Status}}
	I0828 18:08:50.516439  367903 status.go:330] ha-932184 host status = "Stopped" (err=<nil>)
	I0828 18:08:50.516465  367903 status.go:343] host is not running, skipping remaining checks
	I0828 18:08:50.516473  367903 status.go:257] ha-932184 status: &{Name:ha-932184 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 18:08:50.516497  367903 status.go:255] checking status of ha-932184-m02 ...
	I0828 18:08:50.516806  367903 cli_runner.go:164] Run: docker container inspect ha-932184-m02 --format={{.State.Status}}
	I0828 18:08:50.547513  367903 status.go:330] ha-932184-m02 host status = "Stopped" (err=<nil>)
	I0828 18:08:50.547549  367903 status.go:343] host is not running, skipping remaining checks
	I0828 18:08:50.547569  367903 status.go:257] ha-932184-m02 status: &{Name:ha-932184-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 18:08:50.547589  367903 status.go:255] checking status of ha-932184-m04 ...
	I0828 18:08:50.548022  367903 cli_runner.go:164] Run: docker container inspect ha-932184-m04 --format={{.State.Status}}
	I0828 18:08:50.565791  367903 status.go:330] ha-932184-m04 host status = "Stopped" (err=<nil>)
	I0828 18:08:50.565816  367903 status.go:343] host is not running, skipping remaining checks
	I0828 18:08:50.565825  367903 status.go:257] ha-932184-m04 status: &{Name:ha-932184-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.09s)

TestMultiControlPlane/serial/RestartCluster (68.8s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-932184 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0828 18:08:59.597468  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-932184 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.902279228s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.80s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

TestMultiControlPlane/serial/AddSecondaryNode (38.05s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-932184 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-932184 --control-plane -v=7 --alsologtostderr: (37.03120823s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-932184 status -v=7 --alsologtostderr: (1.014500319s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.05s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

TestJSONOutput/start/Command (53s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-129482 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0828 18:11:15.734190  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-129482 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (53.000280418s)
--- PASS: TestJSONOutput/start/Command (53.00s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-129482 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-129482 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.77s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-129482 --output=json --user=testUser
E0828 18:11:43.438837  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-129482 --output=json --user=testUser: (5.764823982s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-522702 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-522702 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.634847ms)
-- stdout --
	{"specversion":"1.0","id":"7533adea-3191-424e-979b-47ea64888472","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-522702] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"debb2bf4-ab2f-4bde-8e1f-02f5bac8f4de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19529"}}
	{"specversion":"1.0","id":"5540c28d-f2f9-4f66-9c1f-dcab326244fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7dd85eaa-90c2-4ef1-a102-ebc133aa2c76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig"}}
	{"specversion":"1.0","id":"63f8c509-4b4c-44f7-90df-409f124773de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube"}}
	{"specversion":"1.0","id":"180e8b14-76cd-4832-a364-dc4f8b8c7b79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ce98f179-2823-4487-a84d-5b3f46870976","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"24eeda2f-572c-4a5d-bfc2-d89e40d50e82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-522702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-522702
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (41.68s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-457683 --network=
E0828 18:11:55.758470  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-457683 --network=: (39.630582339s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-457683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-457683
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-457683: (2.026202978s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.68s)

TestKicCustomNetwork/use_default_bridge_network (32.9s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-991595 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-991595 --network=bridge: (30.929006953s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-991595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-991595
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-991595: (1.948389626s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.90s)

TestKicExistingNetwork (34.41s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-278044 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-278044 --network=existing-network: (32.200798754s)
helpers_test.go:175: Cleaning up "existing-network-278044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-278044
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-278044: (2.042618773s)
--- PASS: TestKicExistingNetwork (34.41s)

TestKicCustomSubnet (36.04s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-703365 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-703365 --subnet=192.168.60.0/24: (34.009958755s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-703365 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-703365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-703365
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-703365: (2.00971972s)
--- PASS: TestKicCustomSubnet (36.04s)

TestKicStaticIP (32.23s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-724724 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-724724 --static-ip=192.168.200.200: (30.060695839s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-724724 ip
helpers_test.go:175: Cleaning up "static-ip-724724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-724724
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-724724: (2.028838005s)
--- PASS: TestKicStaticIP (32.23s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (68.11s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-016133 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-016133 --driver=docker  --container-runtime=containerd: (32.589273108s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-019087 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-019087 --driver=docker  --container-runtime=containerd: (30.139518272s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-016133
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-019087
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-019087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-019087
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-019087: (1.919510998s)
helpers_test.go:175: Cleaning up "first-016133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-016133
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-016133: (2.21371659s)
--- PASS: TestMinikubeProfile (68.11s)

TestMountStart/serial/StartWithMountFirst (6.4s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-414113 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-414113 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.401083981s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.40s)

TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-414113 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.61s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-426904 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-426904 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.606567009s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.61s)

TestMountStart/serial/VerifyMountSecond (0.3s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-426904 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (1.6s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-414113 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-414113 --alsologtostderr -v=5: (1.602125908s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-426904 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-426904
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-426904: (1.199956657s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.54s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-426904
E0828 18:16:15.732947  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-426904: (6.536259361s)
--- PASS: TestMountStart/serial/RestartStopped (7.54s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-426904 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (69.1s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-804721 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0828 18:16:55.757254  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-804721 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.617737593s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.10s)

TestMultiNode/serial/DeployApp2Nodes (17.2s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-804721 -- rollout status deployment/busybox: (15.323933402s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- exec busybox-7dff88458-956mz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- exec busybox-7dff88458-fc49r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- exec busybox-7dff88458-956mz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- exec busybox-7dff88458-fc49r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- exec busybox-7dff88458-956mz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- exec busybox-7dff88458-fc49r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.20s)

TestMultiNode/serial/PingHostFrom2Pods (1s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- exec busybox-7dff88458-956mz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- exec busybox-7dff88458-956mz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- exec busybox-7dff88458-fc49r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-804721 -- exec busybox-7dff88458-fc49r -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)

TestMultiNode/serial/AddNode (16.48s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-804721 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-804721 -v 3 --alsologtostderr: (15.810473017s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.48s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-804721 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.32s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

TestMultiNode/serial/CopyFile (9.95s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp testdata/cp-test.txt multinode-804721:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp multinode-804721:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3322110693/001/cp-test_multinode-804721.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp multinode-804721:/home/docker/cp-test.txt multinode-804721-m02:/home/docker/cp-test_multinode-804721_multinode-804721-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m02 "sudo cat /home/docker/cp-test_multinode-804721_multinode-804721-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp multinode-804721:/home/docker/cp-test.txt multinode-804721-m03:/home/docker/cp-test_multinode-804721_multinode-804721-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m03 "sudo cat /home/docker/cp-test_multinode-804721_multinode-804721-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp testdata/cp-test.txt multinode-804721-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp multinode-804721-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3322110693/001/cp-test_multinode-804721-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp multinode-804721-m02:/home/docker/cp-test.txt multinode-804721:/home/docker/cp-test_multinode-804721-m02_multinode-804721.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721 "sudo cat /home/docker/cp-test_multinode-804721-m02_multinode-804721.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp multinode-804721-m02:/home/docker/cp-test.txt multinode-804721-m03:/home/docker/cp-test_multinode-804721-m02_multinode-804721-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m03 "sudo cat /home/docker/cp-test_multinode-804721-m02_multinode-804721-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp testdata/cp-test.txt multinode-804721-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp multinode-804721-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3322110693/001/cp-test_multinode-804721-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp multinode-804721-m03:/home/docker/cp-test.txt multinode-804721:/home/docker/cp-test_multinode-804721-m03_multinode-804721.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721 "sudo cat /home/docker/cp-test_multinode-804721-m03_multinode-804721.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 cp multinode-804721-m03:/home/docker/cp-test.txt multinode-804721-m02:/home/docker/cp-test_multinode-804721-m03_multinode-804721-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 ssh -n multinode-804721-m02 "sudo cat /home/docker/cp-test_multinode-804721-m03_multinode-804721-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.95s)
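The CopyFile sequence above pairs every "minikube cp" with an "ssh -n <node> \"sudo cat ...\"" readback of the same path. A hedged Go sketch of one such round trip (binary, profile and node names are placeholders; this is a sketch of the pattern in the log, not the real helpers_test.go code):

    package copyfile

    import (
        "bytes"
        "os"
        "os/exec"
        "testing"
    )

    // roundTrip copies a local file onto a node with "minikube cp", reads it
    // back over "minikube ssh", and compares the bytes.
    func roundTrip(t *testing.T, bin, profile, node, local, remote string) {
        t.Helper()
        if out, err := exec.Command(bin, "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
            t.Fatalf("cp failed: %v\n%s", err, out)
        }
        got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
        if err != nil {
            t.Fatalf("readback failed: %v", err)
        }
        want, err := os.ReadFile(local)
        if err != nil {
            t.Fatal(err)
        }
        if !bytes.Equal(got, want) {
            t.Errorf("content mismatch for %s on %s", remote, node)
        }
    }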

TestMultiNode/serial/StopNode (2.3s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-804721 node stop m03: (1.240051077s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 status
E0828 18:18:18.823735  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-804721 status: exit status 7 (518.501336ms)

-- stdout --
	multinode-804721
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-804721-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-804721-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-804721 status --alsologtostderr: exit status 7 (537.230275ms)

-- stdout --
	multinode-804721
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-804721-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-804721-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0828 18:18:18.919520  421451 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:18:18.919689  421451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:18.919698  421451 out.go:358] Setting ErrFile to fd 2...
	I0828 18:18:18.919703  421451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:18.919962  421451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	I0828 18:18:18.920161  421451 out.go:352] Setting JSON to false
	I0828 18:18:18.920206  421451 mustload.go:65] Loading cluster: multinode-804721
	I0828 18:18:18.920281  421451 notify.go:220] Checking for updates...
	I0828 18:18:18.920657  421451 config.go:182] Loaded profile config "multinode-804721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0828 18:18:18.920673  421451 status.go:255] checking status of multinode-804721 ...
	I0828 18:18:18.921325  421451 cli_runner.go:164] Run: docker container inspect multinode-804721 --format={{.State.Status}}
	I0828 18:18:18.940095  421451 status.go:330] multinode-804721 host status = "Running" (err=<nil>)
	I0828 18:18:18.940120  421451 host.go:66] Checking if "multinode-804721" exists ...
	I0828 18:18:18.940425  421451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-804721
	I0828 18:18:18.969101  421451 host.go:66] Checking if "multinode-804721" exists ...
	I0828 18:18:18.969456  421451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 18:18:18.969509  421451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-804721
	I0828 18:18:18.986320  421451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/multinode-804721/id_rsa Username:docker}
	I0828 18:18:19.096722  421451 ssh_runner.go:195] Run: systemctl --version
	I0828 18:18:19.100906  421451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:18:19.112637  421451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 18:18:19.174843  421451 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-28 18:18:19.16503987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 18:18:19.175489  421451 kubeconfig.go:125] found "multinode-804721" server: "https://192.168.67.2:8443"
	I0828 18:18:19.175522  421451 api_server.go:166] Checking apiserver status ...
	I0828 18:18:19.175564  421451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:18:19.187817  421451 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1479/cgroup
	I0828 18:18:19.200779  421451 api_server.go:182] apiserver freezer: "9:freezer:/docker/bcb30095ef2e5df09d76f2eab430c71893784700751f9c8605832e925bac47a0/kubepods/burstable/pod6df266f98ad36de84d4a444435ff5b95/d28a87fa08ea8d17c9687af37f07019311ad8156260d10a9a68b464dce0a8c3d"
	I0828 18:18:19.200886  421451 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bcb30095ef2e5df09d76f2eab430c71893784700751f9c8605832e925bac47a0/kubepods/burstable/pod6df266f98ad36de84d4a444435ff5b95/d28a87fa08ea8d17c9687af37f07019311ad8156260d10a9a68b464dce0a8c3d/freezer.state
	I0828 18:18:19.211136  421451 api_server.go:204] freezer state: "THAWED"
	I0828 18:18:19.211163  421451 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0828 18:18:19.219288  421451 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0828 18:18:19.219325  421451 status.go:422] multinode-804721 apiserver status = Running (err=<nil>)
	I0828 18:18:19.219338  421451 status.go:257] multinode-804721 status: &{Name:multinode-804721 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 18:18:19.219355  421451 status.go:255] checking status of multinode-804721-m02 ...
	I0828 18:18:19.219680  421451 cli_runner.go:164] Run: docker container inspect multinode-804721-m02 --format={{.State.Status}}
	I0828 18:18:19.236717  421451 status.go:330] multinode-804721-m02 host status = "Running" (err=<nil>)
	I0828 18:18:19.236742  421451 host.go:66] Checking if "multinode-804721-m02" exists ...
	I0828 18:18:19.237070  421451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-804721-m02
	I0828 18:18:19.252750  421451 host.go:66] Checking if "multinode-804721-m02" exists ...
	I0828 18:18:19.253096  421451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 18:18:19.253148  421451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-804721-m02
	I0828 18:18:19.270694  421451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/19529-294791/.minikube/machines/multinode-804721-m02/id_rsa Username:docker}
	I0828 18:18:19.364320  421451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:18:19.385930  421451 status.go:257] multinode-804721-m02 status: &{Name:multinode-804721-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0828 18:18:19.385965  421451 status.go:255] checking status of multinode-804721-m03 ...
	I0828 18:18:19.386286  421451 cli_runner.go:164] Run: docker container inspect multinode-804721-m03 --format={{.State.Status}}
	I0828 18:18:19.403164  421451 status.go:330] multinode-804721-m03 host status = "Stopped" (err=<nil>)
	I0828 18:18:19.403199  421451 status.go:343] host is not running, skipping remaining checks
	I0828 18:18:19.403208  421451 status.go:257] multinode-804721-m03 status: &{Name:multinode-804721-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
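The stderr trace above shows how "status" decides the apiserver is healthy: it locates the kube-apiserver process, confirms its freezer cgroup is THAWED, then probes https://<control-plane-IP>:8443/healthz and expects a 200 with body "ok". A simplified Go probe along those lines (it skips TLS verification for brevity, which the real client does not do):

    package healthz

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeHealthz issues the same kind of GET the status command logs above
    // ("Checking apiserver healthz at https://.../healthz"). The TLS skip is a
    // shortcut for this sketch only.
    func probeHealthz(endpoint string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }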

TestMultiNode/serial/StartAfterStop (9.63s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-804721 node start m03 -v=7 --alsologtostderr: (8.905336762s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.63s)

TestMultiNode/serial/RestartKeepsNodes (86.66s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-804721
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-804721
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-804721: (24.997558047s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-804721 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-804721 --wait=true -v=8 --alsologtostderr: (1m1.54897764s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-804721
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.66s)

TestMultiNode/serial/DeleteNode (5.99s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-804721 node delete m03: (5.271495609s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.99s)

TestMultiNode/serial/StopMultiNode (24.03s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-804721 stop: (23.858752317s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-804721 status: exit status 7 (88.53777ms)

-- stdout --
	multinode-804721
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-804721-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-804721 status --alsologtostderr: exit status 7 (79.352551ms)

-- stdout --
	multinode-804721
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-804721-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0828 18:20:25.677852  429894 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:20:25.678042  429894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:20:25.678072  429894 out.go:358] Setting ErrFile to fd 2...
	I0828 18:20:25.678094  429894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:20:25.678369  429894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	I0828 18:20:25.678577  429894 out.go:352] Setting JSON to false
	I0828 18:20:25.678647  429894 mustload.go:65] Loading cluster: multinode-804721
	I0828 18:20:25.678730  429894 notify.go:220] Checking for updates...
	I0828 18:20:25.679106  429894 config.go:182] Loaded profile config "multinode-804721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0828 18:20:25.679144  429894 status.go:255] checking status of multinode-804721 ...
	I0828 18:20:25.679733  429894 cli_runner.go:164] Run: docker container inspect multinode-804721 --format={{.State.Status}}
	I0828 18:20:25.697004  429894 status.go:330] multinode-804721 host status = "Stopped" (err=<nil>)
	I0828 18:20:25.697026  429894 status.go:343] host is not running, skipping remaining checks
	I0828 18:20:25.697033  429894 status.go:257] multinode-804721 status: &{Name:multinode-804721 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 18:20:25.697059  429894 status.go:255] checking status of multinode-804721-m02 ...
	I0828 18:20:25.697384  429894 cli_runner.go:164] Run: docker container inspect multinode-804721-m02 --format={{.State.Status}}
	I0828 18:20:25.713063  429894 status.go:330] multinode-804721-m02 host status = "Stopped" (err=<nil>)
	I0828 18:20:25.713084  429894 status.go:343] host is not running, skipping remaining checks
	I0828 18:20:25.713091  429894 status.go:257] multinode-804721-m02 status: &{Name:multinode-804721-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.03s)

TestMultiNode/serial/RestartMultiNode (50.04s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-804721 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-804721 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.381906793s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-804721 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
E0828 18:21:15.732098  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.04s)

TestMultiNode/serial/ValidateNameConflict (33.95s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-804721
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-804721-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-804721-m02 --driver=docker  --container-runtime=containerd: exit status 14 (75.662622ms)

-- stdout --
	* [multinode-804721-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-804721-m02' is duplicated with machine name 'multinode-804721-m02' in profile 'multinode-804721'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-804721-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-804721-m03 --driver=docker  --container-runtime=containerd: (31.555063716s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-804721
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-804721: exit status 80 (322.426231ms)

-- stdout --
	* Adding node m03 to cluster multinode-804721 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-804721-m03 already exists in multinode-804721-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-804721-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-804721-m03: (1.937858536s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.95s)
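ValidateNameConflict expects exit status 14 (MK_USAGE) when a requested profile name is already used as a machine name inside another profile, as in the "multinode-804721-m02" error above. A toy version of that uniqueness check (purely illustrative; minikube's real validation is more involved and lives in minikube itself):

    package profilecheck

    import "fmt"

    // validateProfileName rejects a proposed profile name that collides with an
    // existing profile or machine name (e.g. "multinode-804721-m02"), mirroring
    // the MK_USAGE error shown above. The slice of existing names is assumed to
    // be gathered elsewhere.
    func validateProfileName(proposed string, existing []string) error {
        for _, name := range existing {
            if name == proposed {
                return fmt.Errorf("profile name %q is duplicated with existing name %q", proposed, name)
            }
        }
        return nil
    }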

TestPreload (122.31s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-841891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0828 18:21:55.757296  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:22:38.801037  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-841891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m10.57313877s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-841891 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-841891 image pull gcr.io/k8s-minikube/busybox: (2.254898864s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-841891
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-841891: (12.067808369s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-841891 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-841891 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (34.810485083s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-841891 image list
helpers_test.go:175: Cleaning up "test-preload-841891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-841891
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-841891: (2.358452907s)
--- PASS: TestPreload (122.31s)

TestScheduledStopUnix (106.45s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-795525 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-795525 --memory=2048 --driver=docker  --container-runtime=containerd: (30.848877247s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-795525 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-795525 -n scheduled-stop-795525
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-795525 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-795525 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-795525 -n scheduled-stop-795525
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-795525
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-795525 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-795525
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-795525: exit status 7 (64.036078ms)

-- stdout --
	scheduled-stop-795525
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-795525 -n scheduled-stop-795525
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-795525 -n scheduled-stop-795525: exit status 7 (69.461202ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-795525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-795525
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-795525: (4.106298597s)
--- PASS: TestScheduledStopUnix (106.45s)

TestInsufficientStorage (13.22s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-771193 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-771193 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.7553735s)

-- stdout --
	{"specversion":"1.0","id":"7b0b6f00-19c5-4c11-bbe1-79822f363278","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-771193] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9aa8066a-771b-4ed3-99a4-a529dd4d69f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19529"}}
	{"specversion":"1.0","id":"999e327f-cf08-4e22-948c-d46009b4057a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e726dd70-a087-4356-8a2b-d073ae547040","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig"}}
	{"specversion":"1.0","id":"81d78808-3183-4e52-90ba-f84453b5bb7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube"}}
	{"specversion":"1.0","id":"4f0f157e-2544-41e2-80a2-c27eaa0787eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f8164de4-1202-49f6-b844-d506d44f27e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"29be8114-ecab-4dd7-98e0-2c6e9cb80452","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1e423668-2a7d-4e74-919b-38cff4663228","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"855e2f0f-4ca1-4822-b891-4e9af1c3511f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"288f5382-39ca-4699-873f-4ce8558bd5b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b2ad3de4-10c1-4630-9798-09efb204843c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-771193\" primary control-plane node in \"insufficient-storage-771193\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"83aecc54-a6cc-4521-9ddd-dbc872efa1bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724775115-19521 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1d4b10a-49f3-47d7-b2b2-462e47d3d204","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"09613db2-899a-47bf-ac72-04272a74a635","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-771193 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-771193 --output=json --layout=cluster: exit status 7 (286.35374ms)

-- stdout --
	{"Name":"insufficient-storage-771193","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-771193","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0828 18:25:53.433328  448599 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-771193" does not appear in /home/jenkins/minikube-integration/19529-294791/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-771193 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-771193 --output=json --layout=cluster: exit status 7 (304.792985ms)

-- stdout --
	{"Name":"insufficient-storage-771193","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-771193","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0828 18:25:53.740560  448661 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-771193" does not appear in /home/jenkins/minikube-integration/19529-294791/kubeconfig
	E0828 18:25:53.750663  448661 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/insufficient-storage-771193/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-771193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-771193
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-771193: (1.873119839s)
--- PASS: TestInsufficientStorage (13.22s)
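With --output=json the start command emits one CloudEvents-style JSON object per line, and the insufficient-storage case surfaces as an error event named RSRC_DOCKER_STORAGE with exitcode 26, as in the output above. A Go sketch of scanning such a stream for that event (an illustrative reader, not the test's actual parser):

    package events

    import (
        "bufio"
        "encoding/json"
        "strings"
    )

    // event mirrors the fields visible in the --output=json lines above; only
    // the ones this sketch needs are declared.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    // hasStorageError reports whether an error event named RSRC_DOCKER_STORAGE
    // appears in line-delimited JSON output from "minikube start --output=json".
    func hasStorageError(output string) bool {
        sc := bufio.NewScanner(strings.NewReader(output))
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
        for sc.Scan() {
            var e event
            if err := json.Unmarshal([]byte(strings.TrimSpace(sc.Text())), &e); err != nil {
                continue // skip non-JSON lines
            }
            if strings.HasSuffix(e.Type, ".error") && e.Data["name"] == "RSRC_DOCKER_STORAGE" {
                return true
            }
        }
        return false
    }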

TestRunningBinaryUpgrade (88.08s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2220079360 start -p running-upgrade-121541 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0828 18:31:15.731712  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2220079360 start -p running-upgrade-121541 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.915308009s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-121541 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0828 18:31:55.757264  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-121541 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.163307711s)
helpers_test.go:175: Cleaning up "running-upgrade-121541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-121541
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-121541: (3.015560208s)
--- PASS: TestRunningBinaryUpgrade (88.08s)

TestKubernetesUpgrade (349.05s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-536521 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-536521 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.376901455s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-536521
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-536521: (1.311489795s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-536521 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-536521 status --format={{.Host}}: exit status 7 (83.487507ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-536521 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-536521 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m39.676858596s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-536521 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-536521 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-536521 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (155.683985ms)

-- stdout --
	* [kubernetes-upgrade-536521] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-536521
	    minikube start -p kubernetes-upgrade-536521 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5365212 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-536521 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-536521 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-536521 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.159328536s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-536521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-536521
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-536521: (2.143031644s)
--- PASS: TestKubernetesUpgrade (349.05s)
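The sequence above (fresh start on v1.20.0, stop, upgrade to v1.31.0, rejected downgrade, clean restart) can be reproduced by hand with the same flags the test passes to minikube. A minimal sketch, assuming a throwaway profile name:

    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    minikube stop -p k8s-upgrade-demo
    minikube status -p k8s-upgrade-demo --format={{.Host}}        # prints "Stopped", exit status 7
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=containerd
    # the downgrade attempt is expected to fail with K8S_DOWNGRADE_UNSUPPORTED (exit status 106)
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd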

                                                
                                    
TestMissingContainerUpgrade (175.55s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2747657966 start -p missing-upgrade-453871 --memory=2200 --driver=docker  --container-runtime=containerd
E0828 18:26:15.731667  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2747657966 start -p missing-upgrade-453871 --memory=2200 --driver=docker  --container-runtime=containerd: (1m39.476931033s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-453871
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-453871: (10.32363789s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-453871
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-453871 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-453871 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.565693084s)
helpers_test.go:175: Cleaning up "missing-upgrade-453871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-453871
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-453871: (2.270085161s)
--- PASS: TestMissingContainerUpgrade (175.55s)
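The scenario here is a cluster whose container has disappeared behind minikube's back: a legacy binary creates the profile, the container is stopped and removed directly through docker, and the binary under test must recreate it from the remaining profile state. A hand-run sketch, with /path/to/minikube-v1.26.0 standing in for the downloaded legacy binary (the test uses a temporary file):

    /path/to/minikube-v1.26.0 start -p missing-upgrade-demo --memory=2200 --driver=docker --container-runtime=containerd
    docker stop missing-upgrade-demo       # the container is named after the profile
    docker rm missing-upgrade-demo
    minikube start -p missing-upgrade-demo --memory=2200 --driver=docker --container-runtime=containerd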

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-763195 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-763195 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (72.137442ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-763195] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
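The usage error above is the expected outcome: --kubernetes-version cannot be combined with --no-kubernetes, and the error points at a possibly stale global default. A short sketch of the rejected and accepted forms, assuming a throwaway profile:

    # rejected: exit status 14 (MK_USAGE)
    minikube start -p no-k8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd
    # clear any globally configured version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p no-k8s-demo --no-kubernetes --driver=docker --container-runtime=containerd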

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (36.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-763195 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-763195 --driver=docker  --container-runtime=containerd: (36.498018327s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-763195 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-763195 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-763195 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.843254551s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-763195 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-763195 status -o json: exit status 2 (375.807533ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-763195","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-763195
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-763195: (2.492003078s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.71s)

                                                
                                    
TestNoKubernetes/serial/Start (6.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-763195 --no-kubernetes --driver=docker  --container-runtime=containerd
E0828 18:26:55.756791  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-763195 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.68596825s)
--- PASS: TestNoKubernetes/serial/Start (6.69s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-763195 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-763195 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.573654ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
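The check relies on the exit code of systemctl over minikube ssh: a zero exit would mean the kubelet unit is active, anything else means Kubernetes components are not running. A minimal sketch using the same command the test issues:

    minikube ssh -p NoKubernetes-763195 "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet is running" \
      || echo "kubelet is not running"       # this branch is expected for a --no-kubernetes profile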

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.93s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-763195
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-763195: (1.415680247s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-763195 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-763195 --driver=docker  --container-runtime=containerd: (6.508896429s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.51s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-763195 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-763195 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.196235ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.90s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (107.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.803188505 start -p stopped-upgrade-532855 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.803188505 start -p stopped-upgrade-532855 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.103989693s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.803188505 -p stopped-upgrade-532855 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.803188505 -p stopped-upgrade-532855 stop: (19.907779918s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-532855 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-532855 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.336619233s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.35s)
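TestStoppedBinaryUpgrade exercises the upgrade-over-a-stopped-cluster path: an older release binary creates and stops the profile, then the binary under test starts it again. A sketch, with /path/to/minikube-v1.26.0 standing in for the downloaded legacy binary (the test uses a temporary file; the log shows the v1.26.0 binary being invoked with the older --vm-driver flag):

    /path/to/minikube-v1.26.0 start -p stopped-upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=containerd
    /path/to/minikube-v1.26.0 -p stopped-upgrade-demo stop
    minikube start -p stopped-upgrade-demo --memory=2200 --driver=docker --container-runtime=containerd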

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-532855
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-532855: (1.14636187s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                    
TestPause/serial/Start (54.26s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-052147 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-052147 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (54.259836169s)
--- PASS: TestPause/serial/Start (54.26s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.23s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-052147 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-052147 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.206041192s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.23s)

                                                
                                    
TestPause/serial/Pause (0.91s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-052147 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.91s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-052147 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-052147 --output=json --layout=cluster: exit status 2 (415.945875ms)

                                                
                                                
-- stdout --
	{"Name":"pause-052147","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-052147","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
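In the cluster layout output, StatusCode 418 / "Paused" marks the paused apiserver and 405 / "Stopped" the kubelet, while the host and kubeconfig stay at 200 / "OK"; the command itself exits 2 because the cluster is not fully running. If jq is available (an assumption; nothing in this log uses jq), the per-component states can be pulled out like this:

    minikube status -p pause-052147 --output=json --layout=cluster \
      | jq '.Nodes[].Components | {apiserver: .apiserver.StatusName, kubelet: .kubelet.StatusName}'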

                                                
                                    
TestPause/serial/Unpause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-052147 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.83s)

                                                
                                    
TestPause/serial/PauseAgain (1.08s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-052147 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-052147 --alsologtostderr -v=5: (1.084818584s)
--- PASS: TestPause/serial/PauseAgain (1.08s)

                                                
                                    
TestPause/serial/DeletePaused (2.81s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-052147 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-052147 --alsologtostderr -v=5: (2.806976862s)
--- PASS: TestPause/serial/DeletePaused (2.81s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-052147
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-052147: exit status 1 (25.113328ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-052147: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)
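After delete, the test walks the docker-level resources to confirm nothing belonging to the profile is left: no container, no volume, no network. A hand-run equivalent built from the same docker commands (the --filter forms are a convenience, not what the test runs verbatim):

    docker ps -a --filter name=pause-052147        # expect no matching container
    docker volume inspect pause-052147             # expect "Error response from daemon: get pause-052147: no such volume"
    docker network ls --filter name=pause-052147   # expect no profile-named network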

                                                
                                    
TestNetworkPlugins/group/false (5.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-860101 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-860101 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (268.375143ms)

                                                
                                                
-- stdout --
	* [false-860101] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 18:33:24.287215  489077 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:33:24.287337  489077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:33:24.287408  489077 out.go:358] Setting ErrFile to fd 2...
	I0828 18:33:24.287415  489077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:33:24.287675  489077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-294791/.minikube/bin
	I0828 18:33:24.288129  489077 out.go:352] Setting JSON to false
	I0828 18:33:24.289068  489077 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8154,"bootTime":1724861851,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0828 18:33:24.289148  489077 start.go:139] virtualization:  
	I0828 18:33:24.297864  489077 out.go:177] * [false-860101] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0828 18:33:24.300281  489077 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:33:24.300404  489077 notify.go:220] Checking for updates...
	I0828 18:33:24.304095  489077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:33:24.305941  489077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-294791/kubeconfig
	I0828 18:33:24.307748  489077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-294791/.minikube
	I0828 18:33:24.309470  489077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0828 18:33:24.311054  489077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:33:24.313516  489077 config.go:182] Loaded profile config "force-systemd-flag-719666": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0828 18:33:24.313673  489077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:33:24.365954  489077 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 18:33:24.366073  489077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 18:33:24.480181  489077 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-28 18:33:24.470108623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 18:33:24.480295  489077 docker.go:307] overlay module found
	I0828 18:33:24.483299  489077 out.go:177] * Using the docker driver based on user configuration
	I0828 18:33:24.484956  489077 start.go:297] selected driver: docker
	I0828 18:33:24.484979  489077 start.go:901] validating driver "docker" against <nil>
	I0828 18:33:24.484994  489077 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:33:24.487575  489077 out.go:201] 
	W0828 18:33:24.489494  489077 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0828 18:33:24.491171  489077 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-860101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-860101" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-860101" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-860101

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-860101"

                                                
                                                
----------------------- debugLogs end: false-860101 [took: 4.762322249s] --------------------------------
helpers_test.go:175: Cleaning up "false-860101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-860101
--- PASS: TestNetworkPlugins/group/false (5.20s)
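The non-zero exit is the point of this group: with the containerd runtime, minikube refuses --cni=false (MK_USAGE, "The containerd container runtime requires CNI"), so the profile is never created and the debugLogs dump above only reports the missing context. A hedged sketch of the distinction, with an illustrative profile name (the named CNI values are standard minikube options, not taken from this log):

    # rejected with containerd: exit status 14
    minikube start -p cni-demo --cni=false --driver=docker --container-runtime=containerd
    # accepted: let minikube choose a CNI, or pick one explicitly
    minikube start -p cni-demo --driver=docker --container-runtime=containerd
    minikube start -p cni-demo --cni=bridge --driver=docker --container-runtime=containerd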

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (155.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-807226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0828 18:34:58.825899  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:36:15.731614  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:36:55.757060  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-807226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m35.540282001s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (155.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-807226 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f1be814a-1159-493a-99b6-d204a729b812] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f1be814a-1159-493a-99b6-d204a729b812] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005507983s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-807226 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-940663 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-940663 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m3.992877321s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-807226 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-807226 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.298713501s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-807226 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.47s)
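The addon is enabled with image and registry overrides (--images / --registries), pointing metrics-server at an echoserver image on a deliberately unreachable fake.domain registry; the follow-up kubectl describe confirms the override reached the deployment. A sketch of checking the resulting image reference directly (the jsonpath query is standard kubectl, not part of the test, and the exact composed image string depends on how minikube joins registry and image):

    kubectl --context old-k8s-version-807226 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to reference fake.domain rather than the default registry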

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-807226 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-807226 --alsologtostderr -v=3: (12.370244139s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-807226 -n old-k8s-version-807226
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-807226 -n old-k8s-version-807226: exit status 7 (155.566319ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-807226 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-940663 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [600ad937-0b6e-4a8d-9119-c8201cd31572] Pending
helpers_test.go:344: "busybox" [600ad937-0b6e-4a8d-9119-c8201cd31572] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [600ad937-0b6e-4a8d-9119-c8201cd31572] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004783665s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-940663 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-940663 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-940663 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.101073424s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-940663 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-940663 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-940663 --alsologtostderr -v=3: (12.123117334s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-940663 -n default-k8s-diff-port-940663
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-940663 -n default-k8s-diff-port-940663: exit status 7 (77.609376ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-940663 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-940663 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0828 18:39:18.802609  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:41:15.732114  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:41:55.756987  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-940663 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m27.453946998s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-940663 -n default-k8s-diff-port-940663
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.82s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jmgxd" [2246b396-6bdb-46f7-adc9-3ed9125b1acc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004221764s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jmgxd" [2246b396-6bdb-46f7-adc9-3ed9125b1acc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002994987s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-940663 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-940663 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-940663 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-940663 -n default-k8s-diff-port-940663
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-940663 -n default-k8s-diff-port-940663: exit status 2 (332.183761ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-940663 -n default-k8s-diff-port-940663
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-940663 -n default-k8s-diff-port-940663: exit status 2 (315.825584ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-940663 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-940663 -n default-k8s-diff-port-940663
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-940663 -n default-k8s-diff-port-940663
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

TestStartStop/group/embed-certs/serial/FirstStart (65.9s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-014747 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-014747 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m5.902123182s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.90s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8sp92" [331ca2a9-2b1a-4622-b1be-a1ba7f03693b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003715039s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8sp92" [331ca2a9-2b1a-4622-b1be-a1ba7f03693b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004215834s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-807226 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-807226 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-807226 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-807226 -n old-k8s-version-807226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-807226 -n old-k8s-version-807226: exit status 2 (319.233771ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-807226 -n old-k8s-version-807226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-807226 -n old-k8s-version-807226: exit status 2 (315.052775ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-807226 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-807226 -n old-k8s-version-807226
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-807226 -n old-k8s-version-807226
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

TestStartStop/group/no-preload/serial/FirstStart (61.34s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-942281 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-942281 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m1.344360767s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.34s)

TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-014747 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0cc7e39d-4f9b-452b-81d6-c5d1b3d9a64c] Pending
helpers_test.go:344: "busybox" [0cc7e39d-4f9b-452b-81d6-c5d1b3d9a64c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0cc7e39d-4f9b-452b-81d6-c5d1b3d9a64c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004551336s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-014747 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.49s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-014747 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-014747 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.278067606s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-014747 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.49s)

TestStartStop/group/embed-certs/serial/Stop (12.69s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-014747 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-014747 --alsologtostderr -v=3: (12.68894564s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.69s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-014747 -n embed-certs-014747
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-014747 -n embed-certs-014747: exit status 7 (90.254681ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-014747 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (303.98s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-014747 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-014747 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (5m3.628864276s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-014747 -n embed-certs-014747
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (303.98s)

TestStartStop/group/no-preload/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-942281 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [39749b72-4d6c-46ba-a3c2-f2f14f8a9ef5] Pending
helpers_test.go:344: "busybox" [39749b72-4d6c-46ba-a3c2-f2f14f8a9ef5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [39749b72-4d6c-46ba-a3c2-f2f14f8a9ef5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003388087s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-942281 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-942281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-942281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.004706078s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-942281 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/no-preload/serial/Stop (12.07s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-942281 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-942281 --alsologtostderr -v=3: (12.072529746s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.07s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-942281 -n no-preload-942281
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-942281 -n no-preload-942281: exit status 7 (97.513452ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-942281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (289.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-942281 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0828 18:46:15.731551  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:46:55.757150  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:27.009981  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:27.016484  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:27.027965  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:27.049439  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:27.091001  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:27.172469  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:27.334573  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:27.656319  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:28.298463  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:29.579812  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:32.141674  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:37.263102  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:47:47.504414  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:07.986363  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:32.933203  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:32.939709  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:32.951092  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:32.972502  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:33.013937  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:33.095496  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:33.257038  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:33.578690  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:34.221065  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:35.502856  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:38.064347  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:43.186114  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:48.948197  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:48:53.428208  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:49:13.910366  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:49:54.872768  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:50:10.871057  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-942281 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m49.114537928s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-942281 -n no-preload-942281
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.58s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vpl8r" [92fb7023-c252-47f6-808c-fbd2b10f01a5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00452953s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vpl8r" [92fb7023-c252-47f6-808c-fbd2b10f01a5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004338666s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-014747 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-014747 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-014747 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-014747 -n embed-certs-014747
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-014747 -n embed-certs-014747: exit status 2 (317.387179ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-014747 -n embed-certs-014747
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-014747 -n embed-certs-014747: exit status 2 (337.163268ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-014747 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-014747 -n embed-certs-014747
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-014747 -n embed-certs-014747
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.08s)

TestStartStop/group/newest-cni/serial/FirstStart (38.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-901127 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-901127 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (38.03038291s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.03s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-p482p" [608fb352-5029-412f-9913-44a3f4b38fb1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004469005s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-p482p" [608fb352-5029-412f-9913-44a3f4b38fb1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004828297s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-942281 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-942281 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (4.18s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-942281 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-942281 --alsologtostderr -v=1: (1.232816549s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-942281 -n no-preload-942281
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-942281 -n no-preload-942281: exit status 2 (484.543635ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-942281 -n no-preload-942281
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-942281 -n no-preload-942281: exit status 2 (448.97001ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-942281 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-942281 --alsologtostderr -v=1: (1.040327854s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-942281 -n no-preload-942281
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-942281 -n no-preload-942281
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.18s)

TestNetworkPlugins/group/auto/Start (74.25s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m14.247756177s)
--- PASS: TestNetworkPlugins/group/auto/Start (74.25s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.43s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-901127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-901127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.428745734s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.43s)

TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-901127 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-901127 --alsologtostderr -v=3: (1.280268839s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-901127 -n newest-cni-901127
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-901127 -n newest-cni-901127: exit status 7 (65.249721ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-901127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (23.42s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-901127 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0828 18:51:15.731540  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:51:16.794165  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-901127 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (22.871994803s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-901127 -n newest-cni-901127
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (23.42s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-901127 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/newest-cni/serial/Pause (4.83s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-901127 --alsologtostderr -v=1
E0828 18:51:38.835517  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-901127 --alsologtostderr -v=1: (1.640838357s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-901127 -n newest-cni-901127
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-901127 -n newest-cni-901127: exit status 2 (466.750216ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-901127 -n newest-cni-901127
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-901127 -n newest-cni-901127: exit status 2 (465.555431ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-901127 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-901127 --alsologtostderr -v=1: (1.183970861s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-901127 -n newest-cni-901127
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-901127 -n newest-cni-901127
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.83s)
E0828 18:56:55.756795  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:56:55.914405  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:19.818467  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:19.824953  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:19.836431  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:19.857912  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:19.899418  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:19.980952  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:20.142662  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:20.464350  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:21.106483  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:22.388562  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:24.950304  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:27.009563  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:30.072138  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:40.314318  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/auto-860101/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (67.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0828 18:51:55.757293  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/addons-606058/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m7.025702045s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.03s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-860101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-860101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bvgkd" [5020c6a2-2557-4a34-9cd2-ec635436259d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bvgkd" [5020c6a2-2557-4a34-9cd2-ec635436259d] Running
E0828 18:52:27.012546  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003861009s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-860101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (66.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m6.531115058s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.53s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-j45nv" [8fe16c28-d01b-4623-8097-0a7a688c6753] Running
E0828 18:52:54.714978  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/old-k8s-version-807226/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006509763s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-860101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-860101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f279s" [e4b6addf-e52d-4aa9-a538-da699239c923] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f279s" [e4b6addf-e52d-4aa9-a538-da699239c923] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004211904s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.57s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-860101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.861738887s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.86s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-j4z6h" [6e2b52bd-06b1-49c9-b8c0-8016965d9939] Running
E0828 18:54:00.635658  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/default-k8s-diff-port-940663/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005345546s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-860101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-860101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bjc59" [d25f3ad6-c934-4ba2-9f4f-704e398e477c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bjc59" [d25f3ad6-c934-4ba2-9f4f-704e398e477c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005187308s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.44s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-860101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-860101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-860101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4vfp6" [9167240d-02c7-4b31-85eb-5a409e8f5667] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4vfp6" [9167240d-02c7-4b31-85eb-5a409e8f5667] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003906318s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (77.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m17.018716593s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.02s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-860101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (52.80s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0828 18:55:33.977061  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:33.983503  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:33.994887  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:34.016287  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:34.057746  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:34.139163  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:34.300627  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:34.622312  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:35.264361  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:36.545987  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:39.107507  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:44.229769  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:55:54.471232  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (52.795806909s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.80s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-860101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-860101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rkmxc" [b8a0083d-6af8-4c90-8ccd-846bc36d413c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0828 18:55:58.804141  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-rkmxc" [b8a0083d-6af8-4c90-8ccd-846bc36d413c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003842373s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-98zbr" [a47d303b-7238-4842-a5ab-3e5048fb7605] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004792955s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-860101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-860101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-860101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c8m68" [df25b731-251c-46a7-8b80-d641df1a45b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c8m68" [df25b731-251c-46a7-8b80-d641df1a45b4] Running
E0828 18:56:14.953016  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/no-preload-942281/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:56:15.731734  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/functional-160288/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004564358s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-860101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (78.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-860101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m18.24147744s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-860101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-860101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zjb7p" [5454cbfe-c864-4f93-b100-e5e137e568fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zjb7p" [5454cbfe-c864-4f93-b100-e5e137e568fc] Running
E0828 18:57:53.535726  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/kindnet-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:53.542171  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/kindnet-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:53.553529  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/kindnet-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:53.574988  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/kindnet-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:53.616498  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/kindnet-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:53.697943  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/kindnet-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:53.859500  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/kindnet-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:54.181405  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/kindnet-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:54.823149  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/kindnet-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:56.105348  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/kindnet-860101/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:57:58.667301  300182 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-294791/.minikube/profiles/kindnet-860101/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003955698s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-860101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-860101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.54s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-843459 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-843459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-843459
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-491314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-491314
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-860101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-860101" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-860101" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-860101

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-860101"

                                                
                                                
----------------------- debugLogs end: kubenet-860101 [took: 4.162351723s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-860101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-860101
--- SKIP: TestNetworkPlugins/group/kubenet (4.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-860101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-860101" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-860101

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-860101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-860101"

                                                
                                                
----------------------- debugLogs end: cilium-860101 [took: 5.089625937s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-860101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-860101
--- SKIP: TestNetworkPlugins/group/cilium (5.42s)
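Note on the two debugLogs dumps above (kubenet-860101 and cilium-860101): both network-plugin tests are skipped before any cluster is created, so the corresponding minikube profile and kubeconfig context never exist. Every host-level probe therefore reports "Profile ... not found" and every kubectl-based probe reports "context ... does not exist"; this is the expected outcome for a SKIP, not an additional failure. A minimal way to confirm this locally, assuming the same out/minikube-linux-arm64 binary used in this run (the commands below are illustrative and not part of the recorded test log):

  out/minikube-linux-arm64 profile list   # neither kubenet-860101 nor cilium-860101 should be listed
  kubectl config get-contexts             # no contexts named kubenet-860101 or cilium-860101
  out/minikube-linux-arm64 start -p cilium-860101 --cni=cilium   # would create the profile/context these debug commands expect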

                                                
                                    